Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-26
... Seeks Comment on Application of the IP Closed Captioning Rules to Video Clips AGENCY: Federal... captioning of video clips delivered by Internet protocol (``IP''), including the extent to which industry has voluntarily captioned IP- delivered video clips. The Commission directed the Media Bureau to issue this...
The use of video clips in teleconsultation for preschool children with movement disorders.
Gorter, Hetty; Lucas, Cees; Groothuis-Oudshoorn, Karin; Maathuis, Carel; van Wijlen-Hempel, Rietje; Elvers, Hans
2013-01-01
To investigate the reliability and validity of video clips in assessing movement disorders in preschool children. The study group included 27 children with neuromotor concerns. The explorative validity group included children with motor problems (n = 21) or with typical development (n = 9). Hempel screening was used for live observation of the child, the full recording, and short video clips. The explorative study tested the validity of the clinical classifications "typical" or "suspect." Agreement between live observation and the full recording was almost perfect; agreement for the clinical classification "typical" or "suspect" was substantial. Agreement between the full recording and short video clips was substantial to moderate. The explorative validity study, based on short video clips and the presence of a neuromotor developmental disorder, showed substantial agreement. Hempel screening enables reliable and valid observation of video clips, but further research is necessary to demonstrate its predictive value.
Chinese Language Video Clips. [CD-ROM].
ERIC Educational Resources Information Center
Fleming, Stephen; Hipley, David; Ning, Cynthia
This compact disc includes video clips covering six topics for the learner of Chinese: personal information, commercial transactions, travel and leisure, health and sports, food, and school. Filmed on location in Beijing, these naturalistic video clips consist mainly of unrehearsed interviews of ordinary people. The learner is led through a series…
Content-based video retrieval by example video clip
NASA Astrophysics Data System (ADS)
Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed
1997-01-01
This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
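The signature-matching idea described above can be illustrated with a minimal Python sketch. The names `frame_signature` and `clip_distance` are hypothetical, and the paper's actual DC+M signatures and similarity metric are more elaborate; this only shows the general shape of comparing clips as sequences of per-frame feature vectors.

```python
import numpy as np

def frame_signature(dc_coeffs, motion):
    """Hypothetical per-frame signature: DC-coefficient values
    concatenated with a coarse motion descriptor (a stand-in for
    the paper's 'DC+M' signature)."""
    return np.concatenate([np.asarray(dc_coeffs, dtype=float).ravel(),
                           np.asarray(motion, dtype=float).ravel()])

def clip_distance(query_sigs, candidate_sigs):
    """Match each query-frame signature to its nearest candidate-frame
    signature and average the distances (lower = more similar clips)."""
    dists = [min(np.linalg.norm(q - c) for c in candidate_sigs)
             for q in query_sigs]
    return float(np.mean(dists))
```

A retrieval system built this way would rank archive clips by `clip_distance` against the query clip's signatures.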
Teaching professionalism to first year medical students using video clips.
Shevell, Allison Haley; Thomas, Aliki; Fuks, Abraham
2015-01-01
Medical schools are confronted with the challenge of teaching professionalism during medical training. The aim of this study was to examine medical students' perceptions of using video clips as a beneficial teaching tool to learn professionalism and other aspects of physicianship. As part of the longitudinal Physician Apprenticeship course at McGill University, first year medical students viewed video clips from the television series ER. The study used qualitative description and thematic analysis to interpret responses to questionnaires, which explored the educational merits of this exercise. Completed questionnaires were submitted by 112 students from 21 small groups. A major theme concerned the students' perceptions of the utility of video clips as a teaching tool, and consisted of comments organized into 10 categories: "authenticity and believability", "thought provoking", "skills and approaches", "setting", "medium", "level of training", "mentorship", "experiential learning", "effectiveness" and "relevance to practice". Another major theme reflected the qualities of physicianship portrayed in video clips, and included seven categories: "patient-centeredness", "communication", "physician-patient relationship", "professionalism", "ethical behavior", "interprofessional practice" and "mentorship". This study demonstrated that students perceived the value of using video clips from a television series as a means of teaching professionalism and other aspects of physicianship.
Public Awareness of Melioidosis in Thailand and Potential Use of Video Clips as Educational Tools
Chansrichavala, Praveen; Wongsuwan, Nittayasee; Suddee, Suthee; Malasit, Mayura; Hongsuwan, Maliwan; Wannapinij, Prapass; Kitphati, Rungreung; Day, Nicholas P. J.; Michie, Susan; Peacock, Sharon J.; Limmathurotsakul, Direk
2015-01-01
Background Melioidosis causes more than 1,000 deaths in Thailand each year. Infection occurs via inoculation, ingestion or inhalation of the causative organism (Burkholderia pseudomallei) present in soil and water. Here, we evaluated public awareness of melioidosis using a combination of a population-based questionnaire, a public engagement campaign to obtain video clips made by the public, and viewpoints on these video clips as potential educational tools about the disease and its prevention. Methods A questionnaire was developed to evaluate public awareness of melioidosis and knowledge about its prevention. From 1 March to 31 April 2012, the questionnaire was delivered to five randomly selected adults in each of 928 districts in Thailand. A video clip contest entitled “Melioidosis, an infectious disease that Thais must know” was run between May and October 2012. The best 12 video clips judged by a contest committee were shown to 71 people at risk from melioidosis (diabetics). Focus group interviews were used to evaluate their perceptions of the video clips. Results Of 4,203 Thais who completed our study questionnaire, 74% had never heard of melioidosis, and 19% had heard of the disease but had no further knowledge. Most participants in all focus group sessions felt that video clips were beneficial and could positively influence them to increase adherence to recommended preventive behaviours, including drinking boiled water and wearing protective gear when in contact with soil or environmental water. Participants suggested that video clips should be presented in the local dialect with simple words rather than medical terms, in a serious manner, with a doctor presenting the facts, and with detailed pictures of each recommended prevention method. Conclusions In summary, public awareness of melioidosis in Thailand is very low, and video clips could serve as a useful medium to educate people and promote disease prevention.
Presented in part at the World Melioidosis Congress 2013, Bangkok, Thailand, 18–20 September 2013 (abstract OS VII-04). PMID:25803048
Authoring Data-Driven Videos with DataClips.
Amini, Fereshteh; Riche, Nathalie Henry; Lee, Bongshin; Monroy-Hernandez, Andres; Irani, Pourang
2017-01-01
Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven "clips" together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.
Exploring the Use of Video-Clips for Motivation Building in a Secondary School EFL Setting
ERIC Educational Resources Information Center
Park, Yujong; Jung, Eunsu
2016-01-01
By employing an action research framework, this study evaluated the effectiveness of a video-based curriculum in motivating EFL learners to learn English. Fifteen Korean EFL students in a secondary school context participated in an 8-week English program, which employed video clips including TED talk replays, sitcoms, TV news reports and movies…
VIDEO BLOGGING AND ENGLISH PRESENTATION PERFORMANCE: A PILOT STUDY.
Alan Hung, Shao-Ting; Danny Huang, Heng-Tsung
2015-10-01
This study investigated the utility of video blogs in improving EFL students' performance in giving oral presentations and, further, examined the students' perceptions toward video blogging. Thirty-six English-major juniors participated in a semester-long video blog project for which they uploaded their 3-min. virtual presentation clips over 18 weeks. Their virtual presentation clips were rated by three raters using a scale for speaking performance that contained 14 presentation skills. Data sources included presentation clips, reflections, and interviews. The results indicated that the students' overall presentation performance improved significantly. In particular, among the 14 presentation skills, projection, intonation, posture, introduction, conclusion, and purpose saw the most substantial improvement. Finally, the qualitative data revealed that learners perceived that the video blog project facilitated learning but increased anxiety.
A new method for digital video documentation in surgical procedures and minimally invasive surgery.
Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S
2003-02-01
Documentation of surgical procedures is limited by the accuracy of description, which depends on the vocabulary and descriptive prowess of the surgeon. Even analog video recording could not solve the problem of documentation satisfactorily, owing to the abundance of recorded material. Capturing video digitally solves most of these problems in the circumstances described in this article. We developed an inexpensive and practical digital video capturing system consisting of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). Median time for editing a video clip was 12 min for an advanced user (including cutting, titling the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capture of intraoperative video sequences in high quality. All standard documentation tasks can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility or surgical elbow room are necessary. The cost is much lower than that of commercially available systems, and setting changes can be performed easily without trained specialists.
Intelligent keyframe extraction for video printing
NASA Astrophysics Data System (ADS)
Zhang, Tong
2004-10-01
Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
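A much-simplified stand-in for the keyframe-selection step can be sketched as follows. The paper analyzes many features (color layout, camera motion, faces, audio events); this sketch, with the hypothetical names `color_histogram` and `extract_keyframes`, uses only coarse color histograms with greedy farthest-point selection to pick visually distinct frames, which already improves on even temporal sampling for clips with uneven content.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Coarse per-channel color histogram as a frame descriptor,
    L1-normalized."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(frame.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def extract_keyframes(frames, k=3):
    """Greedy farthest-point selection over histogram features: start
    with the first frame, then repeatedly add the frame farthest from
    all frames chosen so far."""
    feats = np.stack([color_histogram(f) for f in frames])
    chosen = [0]
    while len(chosen) < min(k, len(frames)):
        d = np.min(np.linalg.norm(feats[:, None] - feats[chosen][None],
                                  axis=2), axis=1)
        chosen.append(int(np.argmax(d)))
    return sorted(chosen)
```

On a clip whose content changes sharply, this picks one frame from each visually distinct segment rather than sampling evenly over time.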
Choi, Yeonja; Song, Eunju; Oh, Eunjung
2015-04-01
This study aimed to evaluate communication skills training for nursing students using a video clip on a smartphone. The study settings were the nursing departments of two universities in South Korea. This was a quasi-experimental study using a nonequivalent control group pre-posttest design. The experimental and control groups consisted of second-year nursing students who had taken a communication course. The experimental group included 45 students, and the control group included 42 students. The experimental group improved significantly more than the control group in communication competence and emotional intelligence. Using a video clip on a smartphone is a helpful method for teaching communication skills. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary
1999-12-01
This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.
NASA Technical Reports Server (NTRS)
1991-01-01
A montage of video clips spanning several years, this footage shows the spacecrews, launches, and landings of different orbiters and missions. Clips include the Endeavour and Atlantis orbiters and are set to the music of the American national anthem.
"Physics on Stage" Festival Video Now Available
NASA Astrophysics Data System (ADS)
2001-01-01
ESO Video Clip 01/01 is issued on the web in conjunction with the release of an 18-min documentary video from the Science Festival of the "Physics On Stage" programme. This unique event took place during November 6-11, 2000, on the CERN premises at the French-Swiss border near Geneva, and formed part of the European Science and Technology Week 2000, an initiative by the European Commission to raise the public awareness of science in Europe. Physics On Stage and the Science Festival were jointly organised by CERN, ESA and ESO, in collaboration with the European Physical Society (EPS) and the European Association for Astronomy Education (EAAE) and national organisations in about 25 European countries. During this final phase of the yearlong Physics On Stage programme, more than 500 physics teachers, government officials and media representatives gathered at CERN to discuss different aspects of physics education. The meeting was particularly timely in view of the current decline of interest in physics and technology among Europe's citizens, especially schoolchildren. It included spectacular demonstrations of new educational materials and methods. An 18-min video is now available that documents this event. It conveys the great enthusiasm of the many participants who spent an extremely fruitful week, meeting and exchanging information with colleagues from all over the continent. It shows the various types of activities that took place, from the central "fair" with national and organisational booths to the exciting performances and other dramatic presentations. Based on the outcome of 13 workshops that focussed on different subject matters, a series of very useful recommendations was passed at the final session. The Science Festival was also visited by several high-ranking officials, including the European Commissioner for Research, Philippe Busquin.
Full reports from the Festival will soon become available from the International Steering Committee. More information is available on the "Physics on Stage" webpages at CERN, ESA and ESO. Note also the brief account published in the December 2000 issue of the ESO Messenger. The present video clip is available in four versions: two MPEG files and two streamer-versions of different sizes; the latter require RealPlayer software. Video Clip 01/01 may be freely reproduced. Tapes of this video clip and the 18-min video, suitable for transmission and in full professional quality (Betacam, etc.), are available for broadcasters upon request; please contact the ESO EPR Department for more details. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 06/00 about Fourth Light at Paranal! (4 September 2000). General information is available on the web about ESO videos.
Park, Eun Jung; Yoon, Young Tak; Hong, Chong Kun; Ha, Young Rock; Ahn, Jung Hwan
2017-07-01
This study evaluated the efficacy of a teaching method using simulated B-lines of hand ultrasound with a wet foam dressing material. This prospective, randomized, noninferiority study was conducted on emergency medical technician students without any relevant training in ultrasound. Following a lecture including simulated (SG) or real video clips (RG) of B-lines, a posttest was conducted and a retention test was performed after 2 months. The test consisted of questions about B-lines in 40 randomly mixed video clips (20 simulated and 20 real videos) with 4 answer scores (R-1 [the correct answer score for the real video clips] vs S-1 [the correct answer score for the simulated video clips] in the posttest, R-2 [the correct answer score for the real video clips] vs S-2 [the correct answer score for the simulated video clips] in the retention test). A total of 77 and 73 volunteers participated in the posttest (RG, 38; SG, 39) and retention test (RG, 36; SG, 37), respectively. There was no significant (P > .05) difference in scores of R-1, S-1, R-2, or S-2 between RG and SG. The mean score differences between RG and SG were -0.6 (95% confidence interval [CI]: -1.49 to 0.11) in R-1, -0.1 (95% CI: -1.04 to 0.86) in S-1, 0 (95% CI: -1.57 to 1.50) in R-2, and -0.2 (95% CI: -1.52 to 0.25) in S-2. The mean differences and 95% CIs for all parameters fell within the noninferiority margin of 2 points (10%). Simulated B-lines of hand ultrasound with a wet foam dressing material were not inferior to real B-lines. They were effective for teaching and simulations. The study was registered with the Clinical Trial Registry of Korea: https://cris.nih.go.kr/cris/index.jsp (KCT0002144).
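The noninferiority logic reported above (a mean score difference whose 95% CI must stay within a 2-point margin) can be sketched in Python. This is a simplified normal-approximation version with a hypothetical function name; the study's actual analysis may have used an exact t-based interval.

```python
import math
from statistics import mean, stdev

def noninferiority_95ci(test_scores, ref_scores, margin):
    """Normal-approximation 95% CI for mean(test) - mean(ref).
    The new method is declared noninferior when the lower bound of
    the CI stays above -margin (e.g., 2 points = 10% of the score)."""
    diff = mean(test_scores) - mean(ref_scores)
    se = math.sqrt(stdev(test_scores) ** 2 / len(test_scores) +
                   stdev(ref_scores) ** 2 / len(ref_scores))
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    return lo > -margin, (lo, hi)
```

With scores as close as those reported (mean differences near zero, CIs within ±2 points), this check returns noninferior; a clearly worse test arm pushes the lower bound below the margin and fails it.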
Alleviating travel anxiety through virtual reality and narrated video technology.
Ahn, J C; Lee, O
2013-01-01
This study presents empirical evidence of the benefit of narrated video clips embedded in virtual reality hotel websites for relieving travel anxiety. Even though virtual reality functions alone were shown to provide some relief of travel anxiety, a stronger virtual reality website can be built when it includes narrated video clips covering important aspects of the hotel. We posit that these important aspects are 1. the escape route and 2. surrounding neighborhood information, both derived from existing research on anxiety disorders as well as travel anxiety. We therefore created one video clip showing and narrating the escape route from the hotel room, and another showing and narrating the surrounding neighborhood. We then conducted experiments with this enhanced virtual reality hotel website by having human subjects use the website and fill out a questionnaire. The result confirms our hypothesis that there is a statistically significant relationship between the degree of travel anxiety and the psychological relief caused by the use of embedded virtual reality functions with narrated video clips on a hotel website (Tab. 2, Fig. 3, Ref. 26).
Speed Biases With Real-Life Video Clips
Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
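Of the three methods above, the bag-of-features comparison step is the easiest to sketch. The following Python fragment assumes the spatio-temporal features of each clip have already been quantized into dictionary "word" indices (the dictionary learning, LDS modeling, and MKL combination from the paper are omitted), and uses a chi-squared nearest-neighbour rule, a common choice for BoF histograms; the function names are hypothetical.

```python
import numpy as np

def bof_histogram(word_ids, vocab_size):
    """Bag-of-features descriptor of one video clip: L1-normalized
    histogram of its quantized spatio-temporal 'words'."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return h / max(h.sum(), 1.0)

def classify(train, query_hist):
    """Nearest-neighbour gesture label under the chi-squared distance.
    `train` is a list of (label, histogram) pairs."""
    def chi2(p, q):
        m = p + q
        mask = m > 0
        return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / m[mask])
    return min(train, key=lambda lh: chi2(lh[1], query_hist))[0]
```

A clip whose word histogram resembles the "grabbing the needle" training clips more than the "passing the needle" ones is assigned that gesture label.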
Cook, Christian J; Crewther, Blair T
2012-01-01
Previous studies have shown that visual images can produce rapid changes in testosterone concentrations. We explored the acute effects of video clips on salivary testosterone and cortisol concentrations and subsequent voluntary squat performance in highly trained male athletes (n=12). Saliva samples were collected on 6 occasions immediately before and 15 min after watching a brief video clip (approximately 4 min in duration) on a computer screen. The watching of a sad, erotic, aggressive, training motivational, humorous or a neutral control clip was randomised. Subjects then performed a squat workout aimed at producing a 3 repetition maximum (3RM) lift. Significant (P<0.001) relative (%) increases in testosterone concentrations were noted with watching the erotic, humorous, aggressive and training videos (versus control and sad), with testosterone decreasing significantly (versus control) after the sad clip. The aggressive video also produced an elevated cortisol response (% change) and more so than the control and humorous videos (P<0.001). A significant (P<0.003) improvement in 3RM performance was noted after the erotic, aggressive and training clips (versus control). A strong within-individual correlation (mean r=0.85) was also noted between the relative changes in testosterone and the 3RM squats across all video sessions (P<0.001). In conclusion, different video clips were associated with different changes in salivary free hormone concentrations and the relative changes in testosterone closely mapped 3RM squat performance in a group of highly trained males. Thus, speculatively, using short video presentations in the pre-workout environment offers an opportunity for understanding the outcomes of hormonal change, athlete behaviour and subsequent voluntary performance. Copyright © 2011 Elsevier Inc. All rights reserved.
Automated UAV-based mapping for airborne reconnaissance and video exploitation
NASA Astrophysics Data System (ADS)
Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre
2009-05-01
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools for automated video exploitation, including calibration, visualization, change detection and 3D reconstruction. The ongoing work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field of view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting improvised explosive devices (IEDs). However, it is tedious and difficult to compare video clips for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand a scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
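At its simplest, change detection between two co-registered frames from the two passes reduces to thresholded differencing. The sketch below, with hypothetical function names, shows only that core step; the tool described above additionally handles registration, illumination changes, and false-alarm suppression.

```python
import numpy as np

def change_mask(frame_a, frame_b, threshold=30):
    """Boolean mask of pixels whose absolute intensity difference
    between two co-registered frames exceeds the threshold."""
    diff = np.abs(frame_a.astype(int) - frame_b.astype(int))
    return diff > threshold

def changed_fraction(frame_a, frame_b, threshold=30):
    """Fraction of the frame flagged as changed -- a crude per-frame
    score for drawing an operator's attention to a clip pair."""
    return float(change_mask(frame_a, frame_b, threshold).mean())
```

Frames whose changed fraction exceeds some operating point would be flagged for operator review.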
Trans-Pacific tele-ultrasound image transmission of fetal central nervous system structures.
Ferreira, Adilson Cunha; Araujo Júnior, Edward; Martins, Wellington P; Jordão, João Francisco; Oliani, Antônio Hélio; Meagher, Simon E; Da Silva Costa, Fabricio
2015-01-01
To assess the quality of images and video clips of fetal central nervous system (CNS) structures obtained by ultrasound and transmitted via tele-ultrasound from Brazil to Australia. In this cross-sectional study, 15 women with normal singleton pregnancies between 20 and 26 weeks were selected. Fetal CNS structures were recorded as still images and video clips. The exams were transmitted in real time using broadband internet and an inexpensive video streaming device. Four blinded examiners evaluated the quality of the exams using a Likert scale. We calculated means, standard deviations, and mean differences; p values were obtained from paired t tests. The quality of the original video clips was slightly better than that of the transmitted video clips (mean difference across all observers = 0.23 points). In 47/60 comparisons (78.3%; 95% CI = 66.4-86.9%) the quality of the video clips was judged to be the same. In 182/240 still images (75.8%; 95% CI = 70.0-80.8%) the scores of the transmitted images were considered the same as the originals. We demonstrated that long-distance tele-ultrasound transmission of fetal CNS structures using an inexpensive video streaming device provides images of subjectively good quality.
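The paired comparison underlying the analysis above can be sketched as follows. This computes only the paired t statistic from per-exam original and transmitted scores (the hypothetical function name and sample numbers are illustrative); converting t to a p value needs the t-distribution CDF and is omitted here.

```python
import math
from statistics import mean, stdev

def paired_t(original, transmitted):
    """Paired t statistic for per-exam quality scores; positive t
    means the original exams scored higher on average. Assumes the
    per-pair differences are not all identical."""
    diffs = [o - t for o, t in zip(original, transmitted)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```

A small positive t on such Likert pairs matches the reported pattern of original clips scoring slightly, but not dramatically, above transmitted ones.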
A spatiotemporal decomposition strategy for personal home video management
NASA Astrophysics Data System (ADS)
Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole
2007-01-01
With the advent and proliferation of low-cost, high-performance digital video recording devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content, so efficient access to video content is expected to be more challenging than image mining. Previously, we developed a content-based image retrieval system and a benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give a better representation of video content at the semantic object and concept levels than image-only representations. In this paper we propose a bottom-up framework that combines interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.
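The key-frame baseline mentioned above (representing a clip by selected frames) can be sketched with a simple histogram-difference selector; the bin count and threshold below are illustrative assumptions, not the paper's actual selection criterion.

```python
import numpy as np

def select_key_frames(frames, bins=16, threshold=0.25):
    """Pick key frames where the intensity histogram shifts sharply.

    frames: list of 2-D uint8 arrays. A frame becomes a key frame when
    the L1 distance between its normalized histogram and that of the
    last key frame exceeds `threshold`. The first frame is always kept.
    """
    def hist(f):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        return h / h.sum()

    keys, ref = [0], hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if np.abs(h - ref).sum() > threshold:   # large appearance change
            keys.append(i)
            ref = h
    return keys

# Example: three dark frames followed by three bright frames.
frames = [np.zeros((8, 8), np.uint8)] * 3 + [np.full((8, 8), 255, np.uint8)] * 3
keys = select_key_frames(frames)
```

As the abstract argues, such an image-only representation ignores motion; the paper's spatiotemporal decomposition is meant to recover exactly what this baseline discards.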
ERIC Educational Resources Information Center
Ryan, Christian; Furley, Philip; Mulhall, Kathleen
2016-01-01
Typically developing children are able to judge who is winning or losing from very short clips of video footage of behaviour during active match play across a number of sports. Inferences from "thin slices" (short video clips) allow participants to make complex judgments about the meaning of posture, gesture and body language. This…
Do medical students watch video clips in eLearning and do these facilitate learning?
Romanov, Kalle; Nevgi, Anne
2007-06-01
There is conflicting evidence about the impact of individual learning style on students' performance in computer-aided learning. We assessed the association between the use of multimedia materials, such as video clips, and collaborative communication tools and learning outcomes among medical students. One hundred and twenty-one third-year medical students attended a course in medical informatics (0.7 credits) consisting of lectures, small-group sessions and eLearning material. The eLearning material contained six learning modules with integrated video clips and collaborative learning tools in WebCT. Learning outcome was measured with a course exam. Approximately two-thirds of the students (68.6%) viewed two or more videos. Female students were significantly more active video-watchers. No significant associations were found between video-watching and self-test scores or time spent in eLearning. Video-watchers were more active in WebCT: they loaded more pages and participated more actively in discussion forums. Video-watching was associated with a better course grade. Students who watched video clips were more active in using collaborative eLearning tools and achieved higher course grades.
iDIY: Video-Based Instruction Using Ipads
ERIC Educational Resources Information Center
Weng, Pei-Lin; Savage, Melissa N.; Bouck, Emily C.
2014-01-01
Video-based instruction is technology-based instruction delivered through video clips in which a human model demonstrates target behaviors (Rayner, Denholm, & Sigafoos, 2009). It can be used to teach a variety of skills, including social communication and behavioral and functional skills (Cihak & Schrader, 2008). Despite the advantages,…
The role of laryngoscopy in the diagnosis of spasmodic dysphonia.
Daraei, Pedram; Villari, Craig R; Rubin, Adam D; Hillel, Alexander T; Hapner, Edie R; Klein, Adam M; Johns, Michael M
2014-03-01
Spasmodic dysphonia (SD) can be difficult to diagnose, and patients often see multiple physicians for many years before diagnosis. Improving the speed of diagnosis for individuals with SD may decrease the time to treatment and improve patient quality of life more quickly. To assess whether the diagnosis of SD can be accurately predicted through auditory cues alone without the assistance of visual cues offered by laryngoscopic examination. Single-masked, case-control study at a specialized referral center that included patients who underwent laryngoscopic examination as part of a multidisciplinary workup for dysphonia. Twenty-two patients were selected in total: 10 with SD, 5 with vocal tremor, and 7 controls without SD or vocal tremor. The laryngoscopic examination was recorded, deidentified, and edited to make 3 media clips for each patient: video alone, audio alone, and combined video and audio. These clips were randomized and presented to 3 fellowship-trained laryngologist raters (A.D.R., A.T.H., and A.M.K.), who established the most probable diagnosis for each clip. Intrarater and interrater reliability were evaluated using repeat clips incorporated in the presentations. We measured diagnostic accuracy for video-only, audio-only, and combined multimedia clips. These measures were established before data collection. Data analysis was accomplished with analysis of variance and Tukey honestly significant differences. Of patients with SD, diagnostic accuracy was 10%, 73%, and 73% for video-only, audio-only, and combined, respectively (P < .001, df = 2). Of patients with vocal tremor, diagnostic accuracy was 93%, 73%, and 100% for video-only, audio-only, and combined, respectively (P = .05, df = 2). Of the controls, diagnostic accuracy was 81%, 19%, and 62% for video-only, audio-only, and combined, respectively (P < .001, df = 2). The diagnosis of SD during examination is based primarily on auditory cues. 
Viewing combined audio and video clips afforded no change in diagnostic accuracy compared with audio alone. Laryngoscopy serves an important role in the diagnosis of SD by excluding other pathologic causes and identifying vocal tremor.
ERIC Educational Resources Information Center
Arya, Poonam; Christ, Tanya; Chiu, Ming
2015-01-01
This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. Emergent coding and Statistical Discourse Analysis were used to analyze the data.…
Video-Based Test Questions: A Novel Means of Evaluation
ERIC Educational Resources Information Center
Hertenstein, Matthew J.; Wayand, Joseph F.
2008-01-01
Many psychology instructors present videotaped examples of behavior at least occasionally during their courses. However, few include video clips during examinations. We provide examples of video-based questions, offer guidelines for their use, and discuss their benefits and drawbacks. In addition, we provide empirical evidence to support the use…
NASA Astrophysics Data System (ADS)
Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro
2010-02-01
In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been applied reliably to human face recognition, is utilized as the feature vector of each VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video stream. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as dramas, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
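The APIDQ feature described above can be illustrated with a toy version: differences between adjacent pixels are quantized into a small number of bins, and the normalized bin counts form the frame's feature vector. The quantizer design below (uniform bins over clipped absolute differences) is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

def apidq_histogram(frame, levels=8, max_diff=64):
    """Toy APIDQ-style feature for one frame (or DC image).

    Differences between horizontally adjacent pixels are quantized into
    `levels` uniform bins over [0, max_diff]; the normalized histogram
    of bin indices serves as the feature vector.
    """
    d = np.diff(frame.astype(np.int16), axis=1)            # adjacent-pixel differences
    q = np.clip((np.abs(d) * levels) // (max_diff + 1), 0, levels - 1)
    h = np.bincount(q.ravel(), minlength=levels).astype(float)
    return h / h.sum()

def match_distance(h1, h2):
    """L1 distance between two APIDQ histograms (smaller = more similar)."""
    return float(np.abs(h1 - h2).sum())

# Example: a flat frame puts all adjacent differences in the lowest bin.
flat = np.zeros((4, 4), dtype=np.uint8)
h = apidq_histogram(flat)
```

In the paper's setting, such per-VOP features would be computed over the DC sequence and scanned with active search to prune temporally; here only the feature itself is sketched.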
First- and third-party ground truth for key frame extraction from consumer video clips
NASA Astrophysics Data System (ADS)
Costello, Kathleen; Luo, Jiebo
2007-02-01
Extracting key frames (KF) from video is of great interest in many applications, such as video summarization, video organization, video compression, and prints from video. KF extraction is not a new problem; however, the current literature has focused mainly on sports or news video. The biggest challenges for key frame selection in the consumer video space are the unconstrained content and the lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truth collections will be discussed.
Estimating contact rates at a mass gathering by using video analysis: a proof-of-concept project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainey, Jeanette J.; Cheriyadat, Anil; Radke, Richard J.
2014-10-24
Current approaches for estimating social mixing patterns and infectious disease transmission at mass gatherings have been limited by various constraints, including low participation rates for volunteer-based research projects and challenges in quantifying spatially and temporally accurate person-to-person interactions. We developed a proof-of-concept project to assess the use of automated video analysis for estimating contact rates of attendees of the GameFest 2013 event at Rensselaer Polytechnic Institute (RPI) in Troy, New York. Video tracking and analysis algorithms were used to estimate the number and duration of contacts for 5 attendees during a 3-minute clip from the RPI video. Attendees were considered to have a contact event if the distance between them and another person was ≤1 meter. Contact duration was estimated in seconds. We also simulated 50 attendees assuming random mixing using a geospatially accurate representation of the same GameFest location. The 5 attendees had an overall median of 2 contact events during the 3-minute video clip (range: 0-6). Contact events varied from less than 5 seconds to the full duration of the 3-minute clip. The random mixing simulation was visualized and presented as a contrasting example. We were able to estimate the number and duration of contacts for five GameFest attendees from a 3-minute video clip that can be compared to a random mixing simulation model at the same location. In conclusion, the next phase will involve scaling the system for simultaneous analysis of mixing patterns from hours-long videos and comparing our results with other approaches for collecting contact data from mass gathering attendees.
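The contact-event definition above (distance ≤1 meter between tracked positions, duration in seconds) can be sketched directly from per-frame tracks. The function below is an illustrative reconstruction of that definition, not the project's actual tracking code.

```python
import numpy as np

def contact_events(track_a, track_b, fps=1.0, radius=1.0):
    """Find contact events between two attendees from tracked positions.

    track_a, track_b: (T, 2) sequences of per-frame (x, y) positions in
    meters. A contact event is a maximal run of frames in which the
    pairwise distance is <= radius (1 m in the study). Returns the list
    of event durations in seconds, given the sampling rate `fps`.
    """
    d = np.linalg.norm(np.asarray(track_a, float) - np.asarray(track_b, float), axis=1)
    close = d <= radius
    events, run = [], 0
    for c in close:
        if c:
            run += 1                 # extend the current contact run
        elif run:
            events.append(run / fps) # close out a finished run
            run = 0
    if run:
        events.append(run / fps)
    return events

# Example: attendee B passes within 1 m of a stationary attendee A for 3 frames.
a = [[0.0, 0.0]] * 6
b = [[5, 0], [2, 0], [0.5, 0], [0.9, 0], [0.5, 0], [3, 0]]
events = contact_events(a, b, fps=1.0)
```

Scaling this to a crowd means evaluating every attendee pair per frame, which is where the automated tracking described in the abstract becomes essential.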
Automatic attention-based prioritization of unconstrained video for compression
NASA Astrophysics Data System (ADS)
Itti, Laurent
2004-06-01
We apply a biologically motivated algorithm that selects visually salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 of 50 clips with the other. Substantial reductions in compressed file size, to about half on average, are obtained for foveated compared with unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
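The foveation filter's core idea, blur increasing with distance from a high-priority region, can be sketched by blending each frame with a blurred copy using a distance-based weight. The box-filter blur, weight profile, and parameter names below are illustrative simplifications of the model's continuously variable blur.

```python
import numpy as np

def foveate(frame, center, sigma_frac=0.25, blur_passes=4):
    """Toy foveation: keep detail near `center`, blur toward the edges.

    frame: 2-D float array; center: (row, col) foveation center. The
    output blends the original with a box-blurred copy, with blend
    weight rising linearly with distance from the center and saturating
    at `sigma_frac` times the larger frame dimension.
    """
    h, w = frame.shape
    # Crude blur: repeated 3x3 box filter built from shifted sums.
    blurred = frame.astype(float)
    for _ in range(blur_passes):
        p = np.pad(blurred, 1, mode='edge')
        blurred = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    weight = np.clip(dist / (sigma_frac * max(h, w)), 0.0, 1.0)
    return (1.0 - weight) * frame + weight * blurred

# Example: a single bright pixel at the foveation center survives intact,
# while distant regions are fully blurred.
frame = np.zeros((16, 16))
frame[8, 8] = 90.0
out = foveate(frame, center=(8, 8))
```

Feeding such selectively blurred frames to a standard encoder is what yields the file-size reductions reported in the abstract: blurred regions cost fewer bits.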
Next-Generation Image and Sound Processing Strategies: Exploiting the Biological Model
2007-05-01
…several video game clips which were recorded while observers interactively played the games. The feature vectors may be derived from either: the… …phase, we use a different video game clip to test the model. Frames from the test clip are passed in parallel to a bottom-up saliency model, as well as… …video games (Figure 6). We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a…
Preliminary Investigation of a Video-Based Stimulus Preference Assessment
ERIC Educational Resources Information Center
Snyder, Katie; Higbee, Thomas S.; Dayton, Elizabeth
2012-01-01
Video clips may be an effective format for presenting complex stimuli in preference assessments. In this preliminary study, we evaluated the correspondence between preference hierarchies generated from preference assessments that included either toys or videos of the toys. The top-ranked item corresponded in both assessments for 5 of the 6…
Fostering Curiosity in Science Classrooms: Inquiring into Practice Using Cogenerative Dialoguing
ERIC Educational Resources Information Center
Higgins, Joanna; Moeed, Azra
2017-01-01
Developing students' scientific literacy requires teachers to use a variety of pedagogical approaches including video as a form of instruction. In addition, using video is a way of engaging students in science ideas not otherwise accessible to them. This study investigated the merit of video clips representing scientific ideas in a secondary…
The Allocation of Visual Attention in Multimedia Search Interfaces
ERIC Educational Resources Information Center
Hughes, Edith Allen
2017-01-01
Multimedia analysts are challenged by the massive numbers of unconstrained video clips generated daily. Such clips can include any possible scene and events, and generally have limited quality control. Analysts who must work with such data are overwhelmed by its volume and lack of computational tools to probe it effectively. Even with advances…
Video diaries on social media: Creating online communities for geoscience research and education
NASA Astrophysics Data System (ADS)
Tong, V.
2013-12-01
Making video clips is an engaging way to learn and teach geoscience. As smartphones become increasingly common, it is relatively straightforward for students to produce 'video diaries' by recording their research and learning experience over the course of a science module. Instead of keeping the video diaries for themselves, students may use social media such as Facebook to share their experience and thoughts. There are potential benefits to linking video diaries and social media in pedagogical contexts. For example, online comments on video clips offer useful feedback and learning materials to the students. Students also have the opportunity to engage in geoscience outreach by producing authentic scientific content at the same time. A video diary project was conducted to test the pedagogical potential of using video diaries on social media in the context of geoscience outreach, undergraduate research and teaching. This project formed part of a problem-based learning module in field geophysics at an archaeological site in the UK. The project involved i) the students posting video clips about their research and problem-based learning in the field on a daily basis; and ii) the lecturer building an online outreach community with partner institutions. In this contribution, I will discuss the implementation of the project and critically evaluate the pedagogical potential of video diaries on social media. My discussion will focus on the following: 1) the effectiveness of video diaries on social media; 2) the student-centered approach of producing geoscience video diaries as part of students' research and problem-based learning; 3) learning, teaching and assessment based on video clips and related commentaries posted on Facebook; and 4) challenges in creating and promoting online communities for geoscience outreach through the use of video diaries.
I will compare the outcomes from this study with those from other pedagogical projects using video clips in geoscience, and evaluate the concept of 'networked public engagement' based on online video diaries.
Zhang, Niu; Chawla, Sudeep
2012-01-01
This study examined the effect of implementing instructional video in ophthalmic physical examination teaching on chiropractic students' laboratory physical examination skills and written test results. Instructional video clips of ophthalmic physical examination, consisting of both standard procedures and common mistakes, were created and used for laboratory teaching. The video clips were also available for student review after class. Students' laboratory skills and written test results were analyzed and compared using one-way analysis of variance (ANOVA) and post hoc multiple comparison tests among three study cohorts: the comparison cohort who did not utilize the instructional videos as a tool, the standard video cohort who viewed only the standard procedure of video clips, and the mistake-referenced video cohort who viewed video clips containing both standard procedure and common mistakes. One-way ANOVA suggested a significant difference of lab results among the three cohorts. Post hoc multiple comparisons further revealed that the mean scores of both video cohorts were significantly higher than that of the comparison cohort (p < .001). There was, however, no significant difference of the mean scores between the two video cohorts (p > .05). However, the percentage of students having a perfect score was the highest in the mistake-referenced video cohort. There was no significant difference of written test scores among all three cohorts (p > .05). The instructional video of the standard procedure improves chiropractic students' ophthalmic physical examination skills, which may be further enhanced by implementing a mistake-referenced instructional video.
Intelligent Flight Control System and Aeronautics Research at NASA Dryden
NASA Technical Reports Server (NTRS)
Brown, Nelson A.
2009-01-01
This video presentation reviews the F-15 Intelligent Flight Control System and contains clips of flight tests and aircraft performance in the areas of target tracking, takeoff and differential stabilators. Video of the APG milestone flight 1g formation is included.
Eustachian Tube Mucosal Inflammation Scale Validation Based on Digital Video Images.
Kivekäs, Ilkka; Pöyhönen, Leena; Aarnisalo, Antti; Rautiainen, Markus; Poe, Dennis
2015-12-01
The most common cause of Eustachian tube dilatory dysfunction is mucosal inflammation. The aim of this study was to validate a scale for Eustachian tube mucosal inflammation, based on digital video clips obtained during diagnostic rigid endoscopy. A previously described four-step scale for grading the degree of inflammation of the mucosa of the Eustachian tube lumen was used for this validation study. A tutorial for use of the scale, including static images and 10-second video clips, was presented to 26 clinicians with various levels of experience. Each clinician then reviewed 35 short digital video samples of Eustachian tubes from patients and rated the degree of inflammation. A subset of the clinicians performed a second rating of the same video clips at a subsequent time. Statistical analysis of the ratings provided inter- and intrarater reliability scores. Twenty-six clinicians with various levels of experience rated a total of 35 videos; thirteen clinicians rated the videos twice. The overall correlation coefficient for the rating of inflammation severity was relatively good (0.74; 95% confidence interval, 0.72-0.76). The intraclass correlation coefficient for intrarater reliability was high (0.86). For those who rated videos twice, the intraclass correlation coefficient improved after the first rating (from 0.73 to 0.76), but the improvement was not statistically significant. The inflammation scale used for Eustachian tube mucosal inflammation is reliable, and it can be used with a high level of consistency by clinicians with various levels of experience.
Music Video: An Analysis at Three Levels.
ERIC Educational Resources Information Center
Burns, Gary
This paper is an analysis of the different aspects of the music video. Music video is defined as having three meanings: an individual clip, a format, or the "aesthetic" that describes what the clips and format look like. The paper examines interruptions, the dialectical tension and the organization of the work of art, shot-scene…
Influence of a negative movie message on food perceptions.
Oakes, Michael E; Slotterback, Carole S
2007-09-01
Many food scholars have suggested that popular media messages affect perceptions of foods. In the present study, experimental participants watched a disparaging video clip concerning McDonald's foods as well as a ruse video clip, and were then asked to evaluate the healthfulness of 33 named foods as well as their nutrient descriptions. Control participants judged the named foods and their descriptions but saw no videos. The movie clip influenced only named foods that are identifiable with McDonald's, and not other fast foods or other named foods that are considered unhealthy.
Reading your own lips: common-coding theory and visual speech perception.
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel; Hale, Sandra; Sommers, Mitchell S
2013-02-01
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.
"Life in the Universe" Final Event Video Now Available
NASA Astrophysics Data System (ADS)
2002-02-01
ESO Video Clip 01/02 is issued on the web in conjunction with the release of a 20-min documentary video from the Final Event of the "Life in the Universe" programme. This unique event took place in November 2001 at CERN in Geneva, as part of the 2001 European Science and Technology Week, an initiative by the European Commission to raise the public awareness of science in Europe. The "Life in the Universe" programme comprised competitions in 23 European countries to identify the best projects from school students. The projects could be scientific or a piece of art, a theatrical performance, poetry or even a musical performance. The only restriction was that the final work must be based on scientific evidence. Winning teams from each country were invited to a "Final Event" at CERN on 8-11 November, 2001 to present their projects to a panel of International Experts during a special three-day event devoted to understanding the possibility of other life forms existing in our Universe. This Final Event also included a spectacular 90-min webcast from CERN with the highlights of the programme. The video describes the Final Event and the enthusiastic atmosphere when more than 200 young students and teachers from all over Europe met with some of the world's leading scientific experts of the field. The present video clip, with excerpts from the film, is available in four versions: two MPEG files and two streamer-versions of different sizes; the latter require RealPlayer software. Video Clip 01/02 may be freely reproduced. The 20-min video is available on request from ESO, for viewing in VHS and, for broadcasters, in Betacam-SP format. Please contact the ESO EPR Department for more details. Life in the Universe was jointly organised by the European Organisation for Nuclear Research (CERN) , the European Space Agency (ESA) and the European Southern Observatory (ESO) , in co-operation with the European Association for Astronomy Education (EAAE). 
Other research organisations were associated with the programme, e.g., the European Molecular Biology Laboratory (EMBL) and the European Synchrotron Radiation Facility (ESRF). Detailed information about the "Life in the Universe" programme can be found at the website http://www.lifeinuniverse.org and a webcast of this 90-min closing session in one of the large experimental halls at CERN is available on the web via that page. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was ESO PR Video Clips 08a-b/01 about The Eagle's EGGs (20 December 2001). General information about ESO videos is available on the web.
The Use of the Library of Video Excerpts (L.O.V.E.) in Personnel Preparation Programs
ERIC Educational Resources Information Center
Trief, Ellen; Rosenblum, L. Penny
2016-01-01
A three-year, grant-funded program to create an online video clip library for personnel programs preparing teachers of students with visual impairments in the United States and Canada was launched in September 2014. The first author was the developer of the Library of Video Excerpts (L.O.V.E.) and collected over 300 video clips that were 8 to 10…
Humorous Videos and Idiom Achievement: Some Pedagogical Considerations for EFL Learners
ERIC Educational Resources Information Center
Neissari, Malihe; Ashraf, Hamid; Ghorbani, Mohammad Reza
2017-01-01
Employing a quasi-experimental design, this study examined the efficacy of humorous idiom video clips on the achievement of Iranian undergraduate students studying English as a Foreign Language (EFL). Forty humorous video clips from the English Idiom Series called "The Teacher" from the BBC website were used to teach 120 idioms to 61…
Another Way of Tracking Moving Objects Using Short Video Clips
ERIC Educational Resources Information Center
Vera, Francisco; Romanque, Cristian
2009-01-01
Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…
Machine-assisted editing of user-generated content
NASA Astrophysics Data System (ADS)
Cremer, Markus; Cook, Randall
2009-02-01
Over recent years, user-generated content has become ubiquitously available and an attractive entertainment source for millions of end users. Particularly for larger events, where many people use their devices to capture the action, a great number of short video clips are made available through appropriate web services. The objective of this presentation is to describe a way to combine these clips by analyzing them and automatically reconstructing the timeline in which the individual video clips were captured. This will enable people to easily create a compelling multimedia experience by leveraging multiple clips taken by different users from different angles and across different time spans. The user will be able to shift into the role of a movie director mastering a multi-camera recording of the event. To achieve this goal, the audio portion of the video clips is analyzed, and waveform characteristics are computed with high temporal granularity in order to facilitate precise time alignment and overlap computation of the user-generated clips. Special care has to be given not only to the robustness of the selected audio features against ambient noise and various distortions, but also to the matching algorithm used to align the user-generated clips properly.
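The audio-based time alignment described above can be illustrated with a toy cross-correlation of raw waveforms. A production system like the one described would correlate robust audio features rather than raw samples; the function name and signal below are illustrative.

```python
import numpy as np

def align_offset(audio_a, audio_b):
    """Estimate the sample offset between two recordings of one event.

    Cross-correlates the mean-removed mono waveforms and returns the
    lag (in samples) at which `audio_b` best matches `audio_a`, i.e.
    audio_b[n] ~ audio_a[n + lag].
    """
    a = audio_a - np.mean(audio_a)
    b = audio_b - np.mean(audio_b)
    corr = np.correlate(a, b, mode='full')       # all relative lags
    return int(np.argmax(corr)) - (len(b) - 1)   # convert index to lag

# Example: clip B is the same "event audio" as clip A, starting 40 samples later.
rng = np.random.default_rng(0)
a = rng.standard_normal(300)
b = a[40:140]
lag = align_offset(a, b)
```

Given such pairwise offsets, overlapping clips can be placed on a common timeline, which is the reconstruction step the presentation targets.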
Video Clip of a Rover Rock-Drilling Demonstration at JPL
2013-02-20
This frame from a video clip shows moments during a demonstration of drilling into a rock at NASA JPL, Pasadena, Calif., with a test double of the Mars rover Curiosity. The drill combines hammering and rotation motions of the bit.
The effect of music video clips on adolescent boys' body image, mood, and schema activation.
Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G
2014-01-01
There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.
Activity recognition using Video Event Segmentation with Text (VEST)
NASA Astrophysics Data System (ADS)
Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge
2014-06-01
Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related message and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.
Yun, Seok Won; Kim, Yun Seok; Lee, Yongjik; Lim, Han Jung; Park, Soon Ik; Jung, Jong Pil; Park, Chang Ryul
2017-01-01
There are many ways to treat focal hyperhidrosis, including surgeries for palmar and axillary hyperhidrosis. However, doctors and patients tend to be reluctant to perform surgery for plantar hyperhidrosis due to misconceptions and prejudices about surgical treatment. In addition, few studies have reported the outcomes of surgery for plantar hyperhidrosis. Therefore, the objective of this study was to determine the outcomes (early and late postoperative satisfaction, complications, compensatory hyperhidrosis, recurrence rate, and efficiency) of surgical treatment for plantar hyperhidrosis. From August 2014 to October 2015, lumbar sympathetic block (LSB) was performed in 82 patients with plantar hyperhidrosis using a clipping method. Limited video-assisted LSB was performed using a 5-mm Ligamax clip or a 3-mm horizontal clip after identifying the L3-4 sympathetic ganglion through finger touch and endoscopic vision. Of the 82 patients, 45 were male and 37 were female. Their mean age was 26.38 years (range, 14-51 years). Mean follow-up time was 6.60 ± 3.56 months. The mean early postoperative satisfaction score was 9.6 at the 10th-day postoperative evaluation. At more than 1 month later, the mean late postoperative satisfaction score was 9.2. There was no significant difference in early postoperative satisfaction score between clipping at level L3 and at L4/5. However, the late postoperative satisfaction score was significantly better in the L3 group than in the L4/5 group. Patients' age and body mass index did not affect the satisfaction score. However, male patients and patients with a history of hyperhidrosis surgery showed higher satisfaction scores than others. Limited video-assisted LSB using clips provided good results with minimal complications and low compensatory hyperhidrosis, contrary to the prejudice against it. Therefore, surgical treatment is recommended for plantar hyperhidrosis.
Light, Sharee N; Moran, Zachary D; Swander, Lena; Le, Van; Cage, Brandi; Burghy, Cory; Westbrook, Cecilia; Greishar, Larry; Davidson, Richard J
2015-01-01
The relation between empathy subtypes and prosocial behavior was investigated in a sample of healthy adults. "Empathic concern" and "empathic happiness", defined as negative and positive vicarious emotion (respectively) combined with an other-oriented feeling of "goodwill" (i.e. a thought to do good to others/see others happy), were elicited in 68 adult participants who watched video clips extracted from the television show Extreme Makeover: Home Edition. Prosocial behavior was quantified via performance on a non-monetary altruistic decision-making task involving book selection and donation. Empathic concern and empathic happiness were measured via self-report (immediately following each video clip) and via facial electromyography recorded from corrugator (active during frowning) and zygomatic (active during smiling) facial regions. Facial electromyographic signs of (a) empathic concern (i.e. frowning) during sad video clips, and (b) empathic happiness (i.e. smiling) during happy video clips, predicted increased prosocial behavior in the form of increased goodwill-themed book selection/donation. Copyright © 2014 Elsevier B.V. All rights reserved.
Semantic-based surveillance video retrieval.
Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve
2007-04-01
Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
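The sketch-based query matching described above can be illustrated by arc-length resampling of trajectories followed by a point-wise distance, with the best-matching database trajectory returned (a toy sketch under assumed data layouts; the paper's actual trajectory model and matching method are richer):

```python
import numpy as np

def resample(traj, n=32):
    """Resample a polyline (k x 2 array of image coordinates) to n points
    spaced uniformly by arc length, so trajectories become comparable."""
    traj = np.asarray(traj, float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, traj[:, 0]),
                            np.interp(t, s, traj[:, 1])])

def sketch_distance(sketch, traj, n=32):
    """Mean point-wise Euclidean distance after resampling both curves."""
    a, b = resample(sketch, n), resample(traj, n)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Toy database of learned motion trajectories; query with a rough diagonal sketch.
db = {
    "left-to-right": np.column_stack([np.linspace(0, 100, 20), np.full(20, 50.0)]),
    "diagonal":      np.column_stack([np.linspace(0, 100, 20), np.linspace(0, 100, 20)]),
}
sketch = np.array([[2.0, 1.0], [51.0, 48.0], [99.0, 101.0]])
best = min(db, key=lambda k: sketch_distance(sketch, db[k]))
```

In a full system the ranked matches would then be mapped back, through the activity-model hierarchy, to the video clips and objects that produced the trajectories.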
Is perception of quality more important than technical quality in patient video cases?
Roland, Damian; Matheson, David; Taub, Nick; Coats, Tim; Lakhanpaul, Monica
2015-08-13
The use of video cases to demonstrate key signs and symptoms in patients (patient video cases or PVCs) is a rapidly expanding field. The aims of this study were to evaluate whether the technical quality, or the judged quality, of a video clip influences a paediatrician's judgement of the acuity of the case, and to assess the relationship between perceived quality and the technical quality of a selection of video clips. Participants (12 senior consultant paediatricians attending an examination workshop) individually categorised 28 PVCs into one of 3 possible acuities and then described the quality of the image seen. The PVCs had been converted into four different technical qualities (differing bit rates ranging from excellent to low quality). Participants' assessments of quality and the actual industry standard of the PVC were independent (333 distinct observations, Spearman's rho = 0.0410, p = 0.4564). Agreement between actual acuity and participants' judgement was generally good at higher acuities but moderate at medium/low acuities of illness (overall correlation 0.664). Perception of the quality of the clip was related to correct assignment of acuity regardless of the technical quality of the clip (number of obs = 330, z = 2.07, p = 0.038). It is important to benchmark PVCs prior to use in learning resources, as experts may not agree on the information within, or the quality of, a clip. It appears that, although PVCs may be beneficial in a pedagogical context, the perceived quality of a clip may be an important determinant of an expert's decision making.
Discrimination of emotional states from scalp- and intracranial EEG using multiscale Rényi entropy.
Tonoyan, Yelena; Chanwimalueang, Theerasak; Mandic, Danilo P; Van Hulle, Marc M
2017-01-01
A data-adaptive, multiscale version of Rényi's quadratic entropy (RQE) is introduced for emotional state discrimination from EEG recordings. The algorithm is applied to scalp EEG recordings of 30 participants watching 4 emotionally-charged video clips taken from a validated public database. Krippendorff's inter-rater statistic reveals that multiscale RQE of the mid-frontal scalp electrodes best discriminates between five emotional states. Multiscale RQE is also applied to joint scalp EEG, amygdala, and occipital-pole intracranial recordings of an implanted patient watching a neutral and an emotionally charged video clip. Unlike for the neutral video clip, the RQEs of the mid-frontal scalp electrodes and the amygdala-implanted electrodes are observed to coincide in the time range where the crux of the emotionally-charged video clip is revealed. In addition, during this time range, phase synchrony between the amygdala and mid-frontal recordings is maximal, as is our 30 participants' inter-rater agreement on the same video clip. A source reconstruction exercise using intracranial recordings supports our assertion that the amygdala could contribute to mid-frontal scalp EEG. By contrast, no such contribution was observed for the occipital pole's intracranial recordings. Our results suggest that emotional states discriminated from mid-frontal scalp EEG are likely to be mirrored by differences in amygdala activation, particularly when recorded in response to emotionally-charged scenes.
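The core quantity, Rényi's quadratic entropy estimated with a Gaussian kernel and evaluated over coarse-grained versions of a signal, can be sketched as follows (an illustrative reconstruction only; the bandwidth rule, the scales, and the coarse-graining are assumptions, not the authors' data-adaptive scheme):

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=None):
    """Plug-in kernel estimate of H2 = -log(integral of p(x)^2):
    the 'information potential' is the mean Gaussian kernel over all pairs."""
    x = np.asarray(x, float)
    if sigma is None:
        # Silverman-style bandwidth (an assumed choice, not the paper's).
        sigma = 1.06 * x.std() * len(x) ** (-1 / 5) + 1e-12
    d2 = (x[:, None] - x[None, :]) ** 2
    ip = np.mean(np.exp(-d2 / (4 * sigma ** 2)) / (2 * sigma * np.sqrt(np.pi)))
    return -np.log(ip)

def coarse_grain(x, scale):
    """Non-overlapping window averages, as in standard multiscale entropy."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], float).reshape(n, scale).mean(axis=1)

def multiscale_rqe(x, scales=(1, 2, 4, 8)):
    return [renyi_quadratic_entropy(coarse_grain(x, s)) for s in scales]

rng = np.random.default_rng(1)
eeg_like = rng.standard_normal(512)   # stand-in for one EEG channel segment
profile = multiscale_rqe(eeg_like)
```

For white noise the profile decreases with scale (averaging shrinks the variance); structured signals deviate from this pattern, which is what makes the multiscale profile a discriminative feature.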
Teaching Shakespeare: Materials and Outcomes for Web-Based Instruction and Class Adjunct.
ERIC Educational Resources Information Center
Schwartz, Helen J.
Multimedia hypertext materials have instructional advantages when used as adjuncts in traditional classes and as the primary means of instruction, as illustrated in this case study of college-level Shakespeare classes. Plays become more accessible through use of audio and video resources, including video clips from play productions. Student work…
Developing Inquiry-as-Stance and Repertoires of Practice: Teacher Learning across Two Settings
ERIC Educational Resources Information Center
Braaten, Melissa L.
2011-01-01
Sixteen science educators joined a science teacher video club for one school year to collaboratively inquire into each other's classroom practice through the use of records of practice including classroom video clips and samples of student work. This group was focused on developing ambitious, equitable science teaching that capitalizes on…
Narrated Video Clips Improve Student Learning
ERIC Educational Resources Information Center
Stephens, Philip J.
2017-01-01
The purpose of this study was to determine whether viewing narrated video clips improves student learning. The study was conducted with undergraduates, mostly Biology majors, in an Animal Physiology course held in successive semesters. When both classes were given the same face-to-face lectures and identical online resources, their performance on an…
Englander, Zoë A.; Haidt, Jonathan; Morris, James P.
2012-01-01
Background Most research investigating the neural basis of social emotions has examined emotions that give rise to negative evaluations of others (e.g. anger, disgust). Emotions triggered by the virtues and excellences of others have been largely ignored. Using fMRI, we investigated the neural basis of two “other-praising” emotions – Moral Elevation (a response to witnessing acts of moral beauty), and Admiration (which we restricted to admiration for physical skill). Methodology/Principal Findings Ten participants viewed the same nine video clips. Three clips elicited moral elevation, three elicited admiration, and three were emotionally neutral. We then performed pair-wise voxel-by-voxel correlations of the BOLD signal between individuals for each video clip and a separate resting-state run. We observed a high degree of inter-subject synchronization, regardless of stimulus type, across several brain regions during free-viewing of videos. Videos in the elevation condition evoked significant inter-subject synchronization in brain regions previously implicated in self-referential and interoceptive processes, including the medial prefrontal cortex, precuneus, and insula. The degree of synchronization was highly variable over the course of the videos, with the strongest synchrony occurring during portions of the videos that were independently rated as most emotionally arousing. Synchrony in these same brain regions was not consistently observed during the admiration videos, and was absent for the neutral videos. Conclusions/Significance Results suggest that the neural systems supporting moral elevation are remarkably consistent across subjects viewing the same emotional content. We demonstrate that model-free techniques such as inter-subject synchronization may be a useful tool for studying complex, context dependent emotions such as self-transcendent emotion. PMID:22745745
Kortelainen, Jukka; Väyrynen, Eero; Seppänen, Tapio
2015-01-01
Recent findings suggest that specific neural correlates for the key elements of basic emotions do exist and can be identified by neuroimaging techniques. In this paper, electroencephalogram (EEG) is used to explore the markers for video-induced emotions. The problem is approached from a classifier perspective: the features that perform best in classifying person's valence and arousal while watching video clips with audiovisual emotional content are searched from a large feature set constructed from the EEG spectral powers of single channels as well as power differences between specific channel pairs. The feature selection is carried out using a sequential forward floating search method and is done separately for the classification of valence and arousal, both derived from the emotional keyword that the subject had chosen after seeing the clips. The proposed classifier-based approach reveals a clear association between the increased high-frequency (15-32 Hz) activity in the left temporal area and the clips described as "pleasant" in the valence and "medium arousal" in the arousal scale. These clips represent the emotional keywords amusement and joy/happiness. The finding suggests the occurrence of a specific neural activation during video-induced pleasant emotion and the possibility to detect this from the left temporal area using EEG.
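The sequential forward floating search used for feature selection above can be sketched generically: add the single best feature, then conditionally drop features whenever a smaller subset improves on the best subset of that size seen so far (a minimal sketch with an invented toy criterion; the study's actual classifier-based scoring of EEG spectral features is not reproduced):

```python
def sffs(features, score, k):
    """Sequential forward floating search sketch.
    `score(subset)` returns a criterion value to maximize;
    returns a subset of size k."""
    selected = []
    best = {}   # best (score, subset) recorded per subset size
    while len(selected) < k:
        # Forward step: add the best remaining feature.
        cand = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(cand)
        best[len(selected)] = (score(selected), list(selected))
        # Floating step: backtrack while removal improves a smaller subset.
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda f: score([g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            if score(reduced) > best[len(reduced)][0]:
                selected = reduced
                best[len(selected)] = (score(selected), list(selected))
            else:
                break
    return selected

# Toy criterion: the subset {0, 2, 5} scores highest, with a size penalty.
def toy_score(subset):
    return len(set(subset) & {0, 2, 5}) - 0.1 * len(subset)

chosen = sffs(list(range(8)), toy_score, k=3)
```

The floating (backtracking) step is what distinguishes SFFS from plain forward selection: it lets the search escape early greedy choices, which matters when spectral-power features are strongly correlated.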
Mulgrew, K E; Volcevski-Kostas, D
2012-09-01
Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood. Copyright © 2012 Elsevier Ltd. All rights reserved.
A Snapshot of the Depiction of Electronic Cigarettes in YouTube Videos.
Romito, Laura M; Hurwich, Risa A; Eckert, George J
2015-11-01
To assess the depiction of e-cigarettes in YouTube videos. The sample (N = 63) was selected from the top 20 search results for "electronic cigarette," and "e-cig" with each term searched twice by the filters "Relevance" and "View Count." Data collected included title, length, number of views, "likes," "dislikes," comments, and inferred demographics of individuals appearing in the videos. Seventy-six percent of videos included at least one man, 62% included a Caucasian, and 50% included at least one young individual. Video content connotation was coded as positive (76%), neutral (18%), or negative (6%). Videos were categorized as advertisement (33%), instructional (17%), news clip (19%), product review (13%), entertainment (11%), public health (3%), and personal testimonial (3%). Most e-cigarette YouTube videos are non-traditional or covert advertisements featuring young Caucasian men.
Student Views on Learning Environments Enriched by Video Clips
ERIC Educational Resources Information Center
Kosterelioglu, Ilker
2016-01-01
This study intended to identify student views regarding the enrichment of instructional process via video clips based on the goals of the class. The study was conducted in Educational Psychology classes at Amasya University Faculty of Education during the 2012-2013 academic year. The study was implemented on students in the Classroom Teaching and…
Fuller, G W; Kemp, S P T; Raftery, M
2017-03-01
To investigate the accuracy and reliability of side-line video review of head impact events to aid identification of concussion in elite sport. Diagnostic accuracy and inter-rater agreement study. Immediate care, match day and team doctors involved in the 2015 Rugby Union World Cup viewed 20 video clips showing broadcasters' footage of head impact events occurring during elite Rugby matches. Subjects subsequently recorded whether any criteria warranting permanent removal from play or a medical room head injury assessment were present. The accuracy of these ratings was compared to consensus expert opinion by calculating mean sensitivity and specificity across raters. The reproducibility of doctors' decisions was additionally assessed using raw agreement and Gwet's AC1 chance-corrected agreement coefficient. Forty rugby medicine doctors were included in the study. Compared to the expert reference standard, the overall sensitivity and specificity of doctors' decisions were 77.5% (95% CI 73.1-81.5%) and 53.3% (95% CI 48.2-58.2%) respectively. Overall there was raw agreement of 67.8% (95% CI 57.9-77.7%) between doctors across all video clips. The chance-corrected Gwet's AC1 agreement coefficient was 0.39 (95% CI 0.17-0.62), indicating fair agreement. Rugby World Cup doctors demonstrated moderate accuracy and fair reproducibility in head injury event decision making when assessing video clips of head impact events. The use of real-time video may improve the identification, decision making and management of concussion in elite sports. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
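The two statistics reported above can be reproduced in outline: sensitivity/specificity against the expert reference standard, and Gwet's AC1 for chance-corrected agreement (a two-rater, two-category sketch with invented toy data; the study pooled many raters and clips):

```python
def sensitivity_specificity(decisions, reference):
    """decisions/reference: lists of 0/1 (1 = criteria present)."""
    tp = sum(d == 1 and r == 1 for d, r in zip(decisions, reference))
    tn = sum(d == 0 and r == 0 for d, r in zip(decisions, reference))
    fp = sum(d == 1 and r == 0 for d, r in zip(decisions, reference))
    fn = sum(d == 0 and r == 1 for d, r in zip(decisions, reference))
    return tp / (tp + fn), tn / (tn + fp)

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 for two raters, two categories:
    AC1 = (pa - pe) / (1 - pe), with pe = 2*pi*(1 - pi),
    where pi is the mean prevalence of category 1 across raters."""
    n = len(rater_a)
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Toy data: 10 clips, expert reference plus two doctors' decisions.
ref   = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
doc_a = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
doc_b = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(doc_a, ref)
ac1 = gwet_ac1(doc_a, doc_b)
```

Unlike Cohen's kappa, AC1 stays stable when one category is rare, which is why it is often preferred for screening-style decisions such as "remove from play".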
NASA Astrophysics Data System (ADS)
Linder, C. A.; Wilbert, M.; Holmes, R. M.
2010-12-01
Multimedia video presentations, which integrate still photographs with video clips, audio interviews, ambient sounds, and music, are an effective and engaging way to tell science stories. In July 2009, Linder joined professors and undergraduates on an expedition to the Kolyma River in northeastern Siberia. This IPY science project, called The Polaris Project (http://www.thepolarisproject.org), is an undergraduate research experience where students and faculty work together to increase our understanding of climate change impacts, including thawing permafrost, in this remote corner of the world. During the summer field season, Linder conducted dozens of interviews, captured over 20,000 still photographs and hours of ambient audio and video clips. Following the 2009 expedition, Linder blended this massive archive of visual and audio information into a 10-minute overview video and five student vignettes. In 2010, Linder again traveled to Siberia as part of the Polaris Project, this time mentoring an environmental journalism student who will lead the production of a video about the 2010 field season. Using examples from the Polaris productions, we will present tips, tools, and techniques for creating compelling multimedia science stories.
Breaking the news on mobile TV: user requirements of a popular mobile content
NASA Astrophysics Data System (ADS)
Knoche, Hendrik O.; Sasse, M. Angela
2006-02-01
This paper presents the results from three lab-based studies that investigated different ways of delivering mobile TV news by measuring user responses to different encoding bitrates, image resolutions and text quality. All studies were carried out with participants watching news content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and on an iPAQ PDA (240x180). Study 2 measured the acceptability of the video quality of full-length news clips of 2.5 minutes, which were recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, combined with different encoding bit rates and audio qualities, and presented on an iPAQ. Study 3 improved the legibility of the text included in the video, simulating separate text delivery. The acceptability of news video quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of video quality. Resolutions of 168x126 and higher were substantially more acceptable when accompanied by optimized high-quality text than by proportionally scaled inline text. When accompanied by high-quality text, TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 for video encoding bitrates of 160 kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.
Video segmentation and camera motion characterization using compressed data
NASA Astrophysics Data System (ADS)
Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain
1997-10-01
We address the problem of automatically extracting visual indexes from videos in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
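The cut-detection task can be illustrated with a simple outlier test on per-frame motion-vector statistics: a cut shows up as an abrupt jump in the motion field that normal camera motion does not produce (a stand-in for the paper's Bayesian classifier; the data layout and threshold are assumptions):

```python
import numpy as np

def detect_cuts(mv_mags, z_thresh=3.0):
    """Flag frames whose change in mean motion-vector magnitude is a large
    outlier relative to the sequence's frame-to-frame differences."""
    mv = np.asarray(mv_mags, float)
    diffs = np.abs(np.diff(mv))
    mu, sd = diffs.mean(), diffs.std() + 1e-12
    return [i + 1 for i, d in enumerate(diffs) if (d - mu) / sd > z_thresh]

# Synthetic per-frame mean |MV| from a compressed stream:
# a smooth pan, then a hard cut at frame 50 where the motion field jumps.
motion = np.concatenate([np.full(50, 2.0), np.full(50, 9.0)])
cuts = detect_cuts(motion)
```

Because the motion vectors come straight from the MPEG-1 stream, this kind of statistic is available without decoding any pixels, which is what makes the compressed-domain approach fast.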
Early Word Comprehension in Infants: Replication and Extension
Bergelson, Elika; Swingley, Daniel
2014-01-01
A handful of recent experimental reports have shown that infants of 6 to 9 months know the meanings of some common words. Here, we replicate and extend these findings. With a new set of items, we show that when young infants (age 6-16 months, n=49) are presented with side-by-side video clips depicting various common early words, and one clip is named in a sentence, they look at the named video at above-chance rates. We demonstrate anew that infants understand common words by 6-9 months, and that performance increases substantially around 14 months. The results imply that 6-9 month olds’ failure to understand words not referring to objects (verbs, adjectives, performatives) in a similar prior study is not attributable to the use of dynamic video depictions. Thus, 6-9 month olds’ experience of spoken language includes some understanding of common words for concrete objects, but relatively impoverished comprehension of other words. PMID:26664329
ERIC Educational Resources Information Center
Lim, Jeff
2013-01-01
"A ubiquitous English vocabulary learning system: evidence of active/passive attitudes vs. usefulness/ease-of-use" introduces and develops the "Ubiquitous English Vocabulary Learning" (UEFL) system, which supports memorization through video clips. According to the paper, video clips give a better chance for students to…
Teaching and Experiencing the Misinformation Effect: A Classroom Exercise
ERIC Educational Resources Information Center
Swenson, John Eric, III; Schneller, Gregory R.
2011-01-01
Students from four sections of Introduction to Psychology (N=82) were taught that participating in a classroom exercise may make memories vulnerable to the misinformation effect. All students were shown a short video clip of a car wreck. Students were then asked either "leading" or "non-leading" questions about the video clip. Students were also…
Comparing Audio and Video Data for Rating Communication
Williams, Kristine; Herman, Ruth; Bontempo, Daniel
2013-01-01
Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group with ICC (2,1) for audio = .91, and video = .94. Interrater consistency for both groups combined was also high with ICC (2,1) for audio and video = .95. Communication ratings using audio and video data were highly correlated. The value of video being superior to audio recorded data should be evaluated in designing studies evaluating nursing care. PMID:23579475
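The ICC(2,1) figures reported above follow the standard two-way random-effects, single-rater formula, which can be sketched from a targets-by-raters matrix (toy ratings invented for illustration; this is the textbook formula, not the study's data):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n targets x k raters) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    row_m = Y.mean(axis=1)   # per-target means
    col_m = Y.mean(axis=0)   # per-rater means
    ssr = k * np.sum((row_m - grand) ** 2)            # between targets
    ssc = n * np.sum((col_m - grand) ** 2)            # between raters
    sse = np.sum((Y - row_m[:, None] - col_m[None, :] + grand) ** 2)
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy data: 5 clips rated by 3 raters on a 7-point emotional-tone scale.
ratings = np.array([[6, 6, 5],
                    [2, 3, 2],
                    [4, 4, 4],
                    [7, 6, 7],
                    [1, 2, 1]])
icc = icc_2_1(ratings)
```

Values near 1, as in the study's .91-.95 range, indicate that individual raters can be treated as interchangeable for the rating scale.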
ERIC Educational Resources Information Center
Olsen, Walter R.; Sommers, William A.
2005-01-01
Video and DVD clips give participants an opportunity to explore values and ideas, learn about one another, and, in the process, build a stronger learning community. "Energizing Staff Development Using Film Clips" is a collection of film and television clips that staff developers can use to encourage discussion and reflection on pertinent, common…
PE on YouTube--Investigating Participation in Physical Education Practice
ERIC Educational Resources Information Center
Quennerstedt, Mikael
2013-01-01
Background: In this article, students' diverse ways of participating in physical education (PE) practice shown in clips on YouTube were investigated. YouTube is the largest user-generated video-sharing website on the Internet, where different video content is presented. The clips on YouTube, as used in this paper, can be seen as a user-generated…
ERIC Educational Resources Information Center
Lin, Che-Hung; Yen, Yu-Ren; Wu, Pai-Lu
2015-01-01
The aim of this study was to develop a store service operations practice course based on simulation-based training with video clip instruction. Action research on problem-solving strategies was employed, with teaching carried out through simulated store operations. Using the counter operations course unit as an example, this study developed 4 weeks of subunits for…
ERIC Educational Resources Information Center
Pennock, Phyllis Haugabook; Schwartz, Renee' S.
2012-01-01
This action research project describes the methods an African-American female instructor used when introducing biology-related video clips with a multicultural component to predominantly white pre-service elementary students. Studies show that introducing multiculturalism into classrooms is crucial for students and teachers. Multicultural…
Objectification of perceptual image quality for mobile video
NASA Astrophysics Data System (ADS)
Lee, Seon-Oh; Sim, Dong-Gyu
2011-06-01
This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed method, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed method with that of existing methods in terms of the differential mean opinion score. Experimental results showed that the proposed method was approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
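Edgeness and blockiness parameters of the kind described can be sketched as simple no-reference measures on a luminance frame (illustrative definitions under assumptions: 8x8 coding blocks and invented formulas; the paper's exact parameters are not specified here):

```python
import numpy as np

def edgeness(frame):
    """Mean gradient magnitude, a crude measure of edge content."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def blockiness(frame, block=8):
    """Ratio of luminance discontinuity at 8x8 block boundaries to the
    discontinuity elsewhere; values well above 1 suggest blocking artifacts."""
    f = frame.astype(float)
    col_diff = np.abs(np.diff(f, axis=1))
    at_boundary = col_diff[:, block - 1::block]
    mask = np.ones(col_diff.shape[1], bool)
    mask[block - 1::block] = False
    elsewhere = col_diff[:, mask]
    return float(at_boundary.mean() / (elsewhere.mean() + 1e-12))

# Smooth horizontal ramp vs. the same frame quantized into 8x8 blocks.
x = np.tile(np.linspace(0, 255, 64), (64, 1))
blocky = x.copy()
for j in range(0, 64, 8):
    blocky[:, j:j + 8] = x[:, j:j + 8].mean()
edge_score = edgeness(x)
score_smooth, score_blocky = blockiness(x), blockiness(blocky)
```

An objective quality score would then combine such parameters, calibrated against the differential mean opinion scores collected in the subjective tests.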
Accommodation training in foreign workers.
Takada, Masumi; Miyao, Masaru; Matsuura, Yasuyuki; Takada, Hiroki
2013-01-01
By relaxing the contracted focus-adjustment muscles around the eyeball, known as the ciliary and extraocular muscles, the degree of pseudomyopia can be reduced. This understanding has led to accommodation training in which a visual target is presented in stereoscopic video clips. However, it has been pointed out that motion sickness can be induced by viewing stereoscopic video clips. In Measurement 1 of the present study, we verified whether the new 3D technology reduced the severity of motion sickness, as assessed by stabilometry. In Measurement 2, we then evaluated the short-term effects of accommodation training using the new stereoscopic video clips on foreign workers (11 females) suffering from eye fatigue. The foreign workers were trained for three days. As a result, visual acuity was statistically improved by continuous accommodation training, which will help promote ciliary muscle stretching.
Expert-novice differences in brain function of field hockey players.
Wimshurst, Z L; Sowden, P T; Wright, M
2016-02-19
The aim of this study was to use functional magnetic resonance imaging to examine the neural bases for perceptual-cognitive superiority in a hockey anticipation task. Thirty participants (15 hockey players, 15 non-hockey players) lay in an MRI scanner while performing a video-based task in which they predicted the direction of an oncoming shot in either a hockey or a badminton scenario. Video clips were temporally occluded either 160 ms before the shot was made or 60 ms after the ball/shuttle left the stick/racquet. Behavioral data showed a significant hockey expertise×video-type interaction in which hockey experts were superior to novices with hockey clips but there were no significant differences with badminton clips. The imaging data, on the other hand, showed a significant main effect of hockey expertise and of video type (hockey vs. badminton), but the expertise×video-type interaction did not survive either a whole-brain or a small-volume correction for multiple comparisons. Further analysis of the expertise main effect revealed that when watching hockey clips, experts showed greater activation in the rostral inferior parietal lobule, which has been associated with an action observation network, and greater activation than novices in Brodmann areas 17 and 18 and the middle frontal gyrus when watching badminton videos. The results provide partial support both for domain-specific and domain-general expertise effects in an action anticipation task. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Comparing audio and video data for rating communication.
Williams, Kristine; Herman, Ruth; Bontempo, Daniel
2013-09-01
Video recording has become increasingly popular in nursing research, adding rich nonverbal, contextual, and behavioral information. However, the benefits of video over audio data have not been well established. We compared communication ratings of audio versus video data using the Emotional Tone Rating Scale. Twenty raters watched video clips of nursing care and rated staff communication on 12 descriptors that reflect dimensions of person-centered and controlling communication. Another group rated audio-only versions of the same clips. Interrater consistency was high within each group, with an intraclass correlation coefficient, ICC(2,1), of .91 for audio and .94 for video. Interrater consistency for both groups combined was also high, with ICC(2,1) = .95 across audio and video. Communication ratings using audio and video data were highly correlated. Whether video data are truly superior to audio-recorded data should therefore be weighed when designing studies evaluating nursing care.
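ICC(2,1) treats both targets (clips) and raters as random effects in a two-way ANOVA. A minimal sketch of the standard Shrout-Fleiss computation (the ratings matrix below is a made-up example, not the study's data):

```python
import numpy as np

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1). Rows are targets (clips), columns are raters."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape                                      # n targets, k raters
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()     # between-target sum of squares
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()     # between-rater sum of squares
    sse = ((x - grand) ** 2).sum() - ssr - ssc          # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 4 clips each scored by 3 raters
clips = [[4, 5, 4], [2, 3, 2], [5, 5, 5], [1, 2, 1]]
print(round(icc_2_1(clips), 3))
```

Perfect agreement yields 1.0; a constant offset between raters (a systematic bias) pulls ICC(2,1) below 1 even when rankings agree exactly, which is why this form is used for absolute-agreement questions like the one above.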
DOT National Transportation Integrated Search
2007-05-01
Subjects rated the workload of clips of forward road scenes (from the advanced collision avoidance system (ACAS) field operational test) in relation to 2 anchor clips of Level of Service (LOS) A and E (light and heavy traffic), and indicated if they ...
Legal drug content in music video programs shown on Australian television on Saturday mornings.
Johnson, Rebecca; Croager, Emma; Pratt, Iain S; Khoo, Natalie
2013-01-01
To examine the extent to which legal drug references (alcohol and tobacco) are present in the music video clips shown on two music video programs broadcast in Australia on Saturday mornings. Further, to examine the music genres in which the references appeared and the dominant messages associated with the references. Music video clips shown on the music video programs 'Rage' (ABC TV) and [V] 'Music Video Chart' (Channel [V]) were viewed over 8 weeks from August 2011 to October 2011 and the number of clips containing verbal and/or visual drug references in each program was counted. The songs were classified by genre and the dominant messages associated with drug references were also classified and analysed. A considerable proportion of music videos (approximately one-third) contained drug references. Alcohol featured in 95% of the music videos that contained drug references. References to alcohol generally associated it with fun and humour, and alcohol and tobacco were both overwhelmingly presented in contexts that encouraged, rather than discouraged, their use. In Australia, Saturday morning is generally considered a children's television viewing timeslot, and several broadcaster Codes of Practice dictate that programs shown on Saturday mornings must be appropriate for viewing by audiences of all ages. Despite this, our findings show that music video programs aired on Saturday mornings contain a considerable level of drug-related content.
ERIC Educational Resources Information Center
Kolikant, Yifat Ben-David; Broza, Orit
2011-01-01
The question of how to enhance the learning of low-achieving students in mathematics presents an important challenge to researchers and teachers alike. We investigated whether and how the use of a contextual story presented in a video clip facilitated low-achieving students' understanding of the meaning of fraction expansion. To this end, we (a)…
Catering to millennial learners: assessing and improving fine-needle aspiration performance.
Rowse, Phillip G; Ruparel, Raaj K; AlJamal, Yazan N; Abdelsattar, Jad M; Heller, Stephanie F; Farley, David R
2014-01-01
Fine-needle aspiration (FNA) of a palpable cervical lymph node is a straightforward procedure that should be safely performed by educated general surgery (GS) trainees. Retention of technical skill is suspect, unless sequential learning experiences are provided. However, voluntary learning experiences are no guarantee that trainees will actually use the resource. A 3-minute objective structured assessment of technical skill-type station was created to assess GS trainee performance using FNA. Objective criteria were developed and a checklist was generated (perfect score = 24). Following abysmal performance of 11 postgraduate year (PGY)-4 trainees on the FNA station of our semiannual surgical skills assessment ("X-Games"), we provided all GS residents with electronic access to a 90-second YouTube video clip demonstrating proper FNA technique. PGY-2 (n = 11) and PGY-3 (n = 10) residents subsequently were tested on FNA technique 5 and 12 days later, respectively. All 32 trainees completed the station in less than 3 minutes. Overall scores ranged from 4 to 24 (mean = 14.9). PGY-4 residents assessed before the creation of the video clip scored lowest (range: 4-18, mean = 11.4). PGY-3 residents (range: 10-22, mean = 17.8) and PGY-2 residents (range: 10-24, mean = 15.8) subsequently scored higher (p < 0.05). Ten residents admitted watching the 90-second FNA video clip and scored higher (mean = 21.7) than the 11 residents that admitted they did not watch the clip (mean = 13.1, p < 0.001). Of the 11 trainees who did not watch the video, 6 claimed they did not have time, and 5 felt it would not be useful to them. Overall performance of FNA was poor in 32 midlevel GS residents. However, a 90-second video clip demonstrating proper FNA technique viewed less than 2 weeks before the examination significantly elevated scores. Half of trainees given the chance to learn online did not take the opportunity to view the video clip. 
Although preemptive learning is effective, future efforts should attempt to improve self-directed learning habits of trainees and evaluate actual long-term skill retention. Copyright © 2014. Published by Elsevier Inc.
Shaking video stabilization with content completion
NASA Astrophysics Data System (ADS)
Peng, Yi; Ye, Qixiang; Liu, Yanmei; Jiao, Jianbin
2009-01-01
A new stabilization algorithm to counterbalance shaking motion in a video, based on the classical Kanade-Lucas-Tomasi (KLT) method, is presented in this paper. Feature points are evaluated with the law of large numbers and a clustering algorithm to reduce the side effect of the moving foreground. Analysis of the change of motion direction is also carried out to detect the existence of shaking. For video clips with detected shaking, an affine transformation is performed to warp the current frame to the reference one. In addition, frame content that goes missing during stabilization is completed with optical flow analysis and a mosaicking operation. Experiments on video clips demonstrate the effectiveness of the proposed algorithm.
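The warping step amounts to fitting a 2-D affine transform to tracked feature correspondences and applying it to the shaky frame. A minimal least-squares sketch of the fitting step, assuming the KLT correspondences are already available (the point lists are illustrative, not from the paper):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst points.

    Returns the 2x3 matrix [[a, b, tx], [c, d, ty]] such that
    dst ~= src @ [[a, c], [b, d]] + [tx, ty].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])                  # design rows: [x, y, 1]
    # Solve A @ p = dst for both output coordinates at once
    p, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return p.T                                  # shape (2, 3)

# Illustrative correspondences: a pure translation by (2, -1)
src = [(0, 0), (1, 0), (0, 1), (3, 2)]
dst = [(2, -1), (3, -1), (2, 0), (5, 1)]
M = fit_affine(src, dst)
print(np.round(M, 3))
```

In a full pipeline the estimated transform (or its smoothed inverse) would then be applied to each frame, e.g. with an image-warping routine such as OpenCV's `warpAffine`.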
Clinical assessment of infant colour at delivery
O'Donnell, Colm P F; Kamlin, C Omar F; Davis, Peter G; Carlin, John B; Morley, Colin J
2007-01-01
Objective Use of video recordings of newborn infants to determine: (1) whether clinicians agreed that infants were pink; and (2) the pulse oximeter oxygen saturation (Spo2) at which infants first looked pink. Methods Selected clips from video recordings of infants taken immediately after delivery were shown to medical and nursing staff. The infants received varying degrees of resuscitation (including none) and were monitored with pulse oximetry. The oximeter readings were obscured from observers but known to the investigators. A timer was visible and the sound was inaudible. The observers were asked to indicate whether each infant was pink at the beginning, became pink during the clip, or was never pink. If an infant was judged to turn pink during the clip, observers recorded the time this occurred and the corresponding Spo2 was determined. Results 27 clinicians assessed videos of 20 infants (mean (SD) gestation 31 (4) weeks). One infant (5%) was perceived to be pink by all observers. The number of clinicians who thought each of the remaining 19 infants was never pink varied from 1 (4%) to 22 (81%). Observers judged the 10 infants with a maximum Spo2 ⩾95% to be never pink on 17% (46/270) of occasions. The Spo2 at which individual infants were perceived to turn pink varied from 10% to 100%. Conclusion Among clinicians observing the same videos there was disagreement about whether newborn infants looked pink, with wide variation in the Spo2 at which they were considered to become pink. PMID:17613535
Using Humorous Sitcom Clips in Teaching Federal Income Taxes
ERIC Educational Resources Information Center
Cecil, H. Wayne
2014-01-01
This article shares the motivation, process, and outcomes of using humorous scenes from television comedies to teach the real world of tax practice. The article advances the literature by reviewing the use of video clips in a previously unexplored discipline, discussing the process of identifying and selecting appropriate clips, and introducing…
ERIC Educational Resources Information Center
Muslem, Asnawi; Mustafa, Faisal; Usman, Bustami; Rahman, Aulia
2017-01-01
This study investigated whether the application of video clips with small groups or with individual teaching-learning activities improved the speaking skills of young EFL learners the most; accordingly a quasi-experimental study with a pre-test, post-test design was done. The instrument used in this study was a test in the form of an oral test or…
ERIC Educational Resources Information Center
Duman, Steve; Locher, Miriam A.
2008-01-01
This paper examines how two American presidential candidates, Barack Obama and Hillary Clinton, make use of a VIDEO EXCHANGE IS CONVERSATION metaphor on YouTube, a channel of communication that allows the exchange of video clips on the Internet. It is argued that the politicians exploit the metaphor for its connotations of creating involvement and…
Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.
Liu, Chang Hong; Chen, Wenfeng; Ward, James; Takahashi, Nozomi
2016-08-08
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
Consolidation of Complex Events via Reinstatement in Posterior Cingulate Cortex
Keidel, James L.; Ing, Leslie P.; Horner, Aidan J.
2015-01-01
It is well-established that active rehearsal increases the efficacy of memory consolidation. It is also known that complex events are interpreted with reference to prior knowledge. However, comparatively little attention has been given to the neural underpinnings of these effects. In healthy adult humans, we investigated the impact of effortful, active rehearsal on memory for events by showing people several short video clips and then asking them to recall these clips, either aloud (Experiment 1) or silently while in an MRI scanner (Experiment 2). In both experiments, actively rehearsed clips were remembered in far greater detail than unrehearsed clips when tested a week later. In Experiment 1, highly similar descriptions of events were produced across retrieval trials, suggesting a degree of semanticization of the memories had taken place. In Experiment 2, spatial patterns of BOLD signal in medial temporal and posterior midline regions were correlated when encoding and rehearsing the same video. Moreover, the strength of this correlation in the posterior cingulate predicted the amount of information subsequently recalled. This is likely to reflect a strengthening of the representation of the video's content. We argue that these representations combine both new episodic information and stored semantic knowledge (or “schemas”). We therefore suggest that posterior midline structures aid consolidation by reinstating and strengthening the associations between episodic details and more generic schematic information. This leads to the creation of coherent memory representations of lifelike, complex events that are resistant to forgetting, but somewhat inflexible and semantic-like in nature. SIGNIFICANCE STATEMENT Memories are strengthened via consolidation. We investigated memory for lifelike events using video clips and showed that rehearsing their content dramatically boosts memory consolidation.
Using MRI scanning, we measured patterns of brain activity while watching the videos and showed that, in a network of brain regions, similar patterns of brain activity are reinstated when rehearsing the same videos. Within the posterior cingulate, the strength of reinstatement predicted how well the videos were remembered a week later. The findings extend our knowledge of the brain regions important for creating long-lasting memories for complex, lifelike events. PMID:26511235
Schweier, Rebecca; Grande, Gesine; Richter, Cynthia; Riedel-Heller, Steffi G; Romppel, Matthias
2018-07-01
To investigate the use of lebensstil-aendern.de ("lifestyle change"), a website providing peer narratives of experiences with successful lifestyle change, and to analyze whether peer model characteristics, clip content, and media type have an influence on the number of visitors, dwell time, and exit rates. An in-depth statistical analysis of website use with multilevel regression analyses. In two years, lebensstil-aendern.de attracted 12,844 visitors. The in-depth statistical analysis of usage rates demonstrated that audio clips were less popular than video or text-only clips, longer clips attracted more visitors, and clips by younger and female interviewees were preferred. User preferences for clip content categories differed between heart and back pain patients. Clips about stress management drew the smallest numbers of visitors in both indication modules. Patients are interested in the experiences of others. Because the quality of information for user-generated content is generally low, healthcare providers should include quality-assured patient narratives in their interventions. User preferences for content, medium, and peer characteristics need to be taken into account. If healthcare providers decide to include patient experiences in their websites, they should plan their intervention according to the different needs and preferences of users. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Davey, B.; Davis, H. B.; Harper-Neely, J.; Bowers, S.
2017-12-01
NASA eClips™ is a multimedia program providing educational resources relevant to the formal K-12 classroom. Science content for the NASA eClips™ 4D elements is drawn from all four divisions of the Science Mission Directorate (SMD) as well as cross-divisional topics. The suite of elements fulfills the following SMD education objectives: Enable STEM education, Improve U.S. scientific literacy, Advance national education goals (CoSTEM), and Leverage efforts through partnerships. A component of eClips™ was the development of NASA Spotlite videos (student-developed videos designed to increase student literacy and address misconceptions of other students) by digital media students. While developing the Spotlite videos, the students gained skills in teamwork, working in groups to accomplish a task, and conveying specific concepts in a video. The teachers felt the video project was a good fit for their courses and enhanced what the students were already learning. Teachers also reported that the students learned knowledge and skills that would help them in future careers, including how to gain a better understanding of a project and the importance of being knowledgeable about the topic. The student-developed eClips videos were then used as part of interactive lessons to help other students learn about key science concepts. As part of our research, we established a quasi-experimental design where one group of students received the intervention including the Spotlite videos (intervention group) and one group did not receive the intervention (comparison group). An overall comparison of post scores between intervention group and comparison group students showed intervention groups had significantly higher scores in three of the four content areas: Ozone, Clouds, and Phase Change.
ERIC Educational Resources Information Center
Dudley, Albert P.; And Others
1997-01-01
Presents various tips that are useful in the classroom for teaching second languages. These tips focus on teaching basic computer operations; using annotations to foster error corrections in language; using video clips as a part of a U.S. history or culture-based English-as-a-Second-Language lesson; using karaoke to speak with less inhibition; and…
NASA Astrophysics Data System (ADS)
2011-01-01
WE RECOMMEND
Online Graphing Calculator: Calculator plots online graphs
Challenge and Change: A History of the Nuffield A-Level Physics Project: Book delves deep into the history of Nuffield physics
SEP Sound: Booklet has ideas for teaching sound but lacks some basics
Reinventing Schools, Reforming Teaching: Fascinating book shows how politics impacts on the classroom
Physics and Technology for Future Presidents: A great book for teaching physics for the modern world
iSeismometer: iPhone app teaches students about seismic waves
WORTH A LOOK
Teachers TV Video Clip: Lesson plan uses video clip to explore new galaxies
Graphing Calculator App: A phone app that handles formulae and graphs
WEB WATCH
Physics.org competition finds the best websites
Cyberbullying: another main type of bullying?
Slonje, Robert; Smith, Peter K
2008-04-01
Cyberbullying has recently emerged as a new form of bullying and harassment. 360 adolescents (12-20 years) were surveyed to examine the nature and extent of cyberbullying in Swedish schools. Four categories of cyberbullying (by text message, email, phone call and picture/video clip) were examined in relation to age and gender, perceived impact, telling others, and perception of adults becoming aware of such bullying. There was a significant incidence of cyberbullying in lower secondary schools, less in sixth-form colleges. Gender differences were few. The impact of cyberbullying was perceived as highly negative for picture/video clip bullying. Cybervictims most often chose to tell either their friends or no one at all about the cyberbullying, so adults may not be aware of it; apart from picture/video clip bullying, pupils likewise perceived adults as largely unaware. Findings are discussed in relation to similarities and differences between cyberbullying and the more traditional forms of bullying.
Kutsuna, Kenichiro; Matsuura, Yasuyuki; Fujikake, Kazuhiro; Miyao, Masaru; Takada, Hiroki
2013-01-01
Visually induced motion sickness (VIMS) is caused by sensory conflict, the disagreement between vergence and visual accommodation while observing stereoscopic images. VIMS can be measured by psychological and physiological methods. We propose a mathematical methodology to measure the effect of three-dimensional (3D) images on the equilibrium function. In this study, body sway in the resting state is compared with that during exposure to 3D video clips on a liquid crystal display (LCD) and on a head mounted display (HMD). In addition, the Simulator Sickness Questionnaire (SSQ) was completed immediately afterward. Based on the statistical analysis of the SSQ subscores and each index for stabilograms, we succeeded in quantifying the VIMS during exposure to the stereoscopic images. Moreover, we discuss changes in the potential functions governing control of standing posture during exposure to stereoscopic video clips.
Streamed video clips to reduce anxiety in children during inhaled induction of anesthesia.
Mifflin, Katherine A; Hackmann, Thomas; Chorney, Jill Maclaren
2012-11-01
Anesthesia induction in children is frequently achieved by inhalation of nitrous oxide and sevoflurane. Pediatric anesthesiologists commonly use distraction techniques such as humor or nonprocedural talk to reduce anxiety and facilitate a smooth transition at this critical phase. There is a large body of successful distraction research that explores the use of video and television distraction methods for minor medical and dental procedures, but little research on the use of this method for ambulatory surgery. In this randomized controlled trial, we examined whether video distraction is effective in reducing the anxiety of children undergoing inhaled induction before ambulatory surgery. Children (control = 47, video = 42) between 2 and 10 years old undergoing ambulatory surgery were randomly assigned to a video distraction or control group. In the video distraction group a video clip of the child's preference was played during induction, and the control group received traditional distraction methods during induction. The modified Yale Preoperative Anxiety Scale was used to assess the children's anxiety before and during the process of receiving inhalation anesthetics. All subjects were similar in their age and anxiety scores before entering the operating rooms. Children in the video distraction group were significantly less anxious at induction and showed a significantly smaller change in anxiety from holding to induction than did children in the control group. Playing video clips during the inhaled induction of children undergoing ambulatory surgery is an effective method of reducing anxiety. Therefore, pediatric anesthesiologists may consider using video distraction as a useful, valid, alternative strategy for achieving a smooth transition to the anesthetized state.
USEPA SEMINARS ON INDOOR AIR VAPOR INTRUSION
This interactive CD has been developed to introduce you to the seminar speakers and their presentation topics. It includes introduction and overview video clips, an interactive class exercise that explains how to interpret and use the new EPA IAVI Guidance, a scrolling seminar vi...
NASA Astrophysics Data System (ADS)
Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin
2006-02-01
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of surgery, with its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at the Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as control. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course.
The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimation of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: Following this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures are able to reproduce depth information, and therefore anatomically complex structures, to a greater extent than after monoscopic teaching. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.
Saying What You're Looking For: Linguistics Meets Video Search.
Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark
2016-10-01
We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce a score indicating how well a video clip depicts that sentence for each clip in a corpus and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.
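At retrieval time, a system like this reduces to scoring every clip in the corpus against the query sentence and returning a sorted list. A skeletal sketch of that outer loop with a stand-in scorer (the real system scores sentence-conditioned object tracks; everything named below is illustrative):

```python
from typing import Callable, List, Tuple

def rank_clips(clips: List[str],
               query: str,
               score: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Score each clip against the sentential query; best match first."""
    scored = [(clip, score(clip, query)) for clip in clips]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Stand-in scorer: fraction of query words appearing in a clip's annotation.
# The actual system instead scores tracked objects against the parsed sentence,
# which is what lets it distinguish "person rode horse" from "horse rode person".
def toy_score(annotation: str, query: str) -> float:
    query_words = query.lower().split()
    annotation_words = set(annotation.lower().split())
    return sum(w in annotation_words for w in query_words) / len(query_words)

corpus = ["person rode the horse", "horse stood in field", "person walked"]
print(rank_clips(corpus, "the person rode the horse", toy_score))
```

Note that the bag-of-words stand-in cannot capture the argument-order distinction the paper emphasizes; the compositional-semantics scorer exists precisely to fix that.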
Microsurgical Clipping of an Unruptured Carotid Cave Aneurysm: 3-Dimensional Operative Video.
Tabani, Halima; Yousef, Sonia; Burkhardt, Jan-Karl; Gandhi, Sirin; Benet, Arnau; Lawton, Michael T
2017-08-01
Most aneurysms originating from the clinoidal segment of the internal carotid artery (ICA) are nowadays managed conservatively or treated endovascularly with coiling (with or without stenting) or flow diverters. However, microsurgical clip occlusion remains an alternative. This video demonstrates clip occlusion of an unruptured right carotid cave aneurysm measuring 7 mm in a 39-year-old woman. The patient opted for surgery because of concerns about the prolonged antiplatelet use associated with endovascular therapy. After patient consent, a standard pterional craniotomy was performed followed by extradural anterior clinoidectomy. After dural opening and sylvian fissure split, a clinoidal flap was opened to enter the extradural space around the clinoidal segment. The dural ring was dissected circumferentially, freeing the medial wall of the ICA down to the sellar region and mobilizing the ICA out of the canal of its clinoidal segment. With the aneurysm neck in view, the aneurysm was clipped with a 45° angled fenestrated clip over the ICA. Indocyanine green angiography confirmed no further filling of the aneurysm and patency of the ICA. Complete aneurysm occlusion was confirmed with postoperative angiography, and the patient had no neurologic deficits (Video 1). This case demonstrates the importance of anterior clinoidectomy and thorough distal dural ring dissection for effective clipping of carotid cave aneurysms. Control of venous bleeding from the cavernous sinus with fibrin glue injection simplifies the dissection, which should minimize manipulation of the optic nerve. Knowledge of this anatomy and proficiency with these techniques are important in an era of declining open aneurysm cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Enhance Video Film Using the Retinex Method
NASA Astrophysics Data System (ADS)
Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.
2018-05-01
An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation serve as the quality criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 Lux); this variation approximates the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip, to obtain the enhanced film directly; second, to every individual image, after which the enhanced images are compiled into the enhanced film. This paper shows that the enhancement technique yields good-quality video film based on a statistical method, and its use is recommended in different applications.
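The per-image statistical criterion described in the abstract can be sketched as follows. The target mean/standard-deviation values and the linear rescaling are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def frame_stats(frame):
    """Return the mean and standard deviation of a grayscale frame."""
    return float(frame.mean()), float(frame.std())

def enhance_frame(frame, target_mean=128.0, target_std=48.0):
    """Linearly rescale a frame so its mean/std match target values
    (a simple stand-in for the paper's statistical enhancement)."""
    m, s = frame_stats(frame)
    if s == 0:
        return np.full_like(frame, target_mean, dtype=np.float64)
    out = (frame - m) / s * target_std + target_mean
    return np.clip(out, 0, 255)

def enhance_clip(frames):
    """The abstract's second way: enhance every individual image,
    then compile the results back into the enhanced film."""
    return [enhance_frame(f.astype(np.float64)) for f in frames]
```

Applying `enhance_clip` to the 80 images of a clip and comparing the per-frame mean and standard deviation before and after mirrors the comparison the paper reports.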
NASA Astrophysics Data System (ADS)
Riendeau, Diane
2011-05-01
As we finish this publishing cycle, I'd like to thank all the readers who sent in video clips. If you have a YouTube clip that you use in class, please send the link and a brief description to driendeau@dist113.org.
Moriuchi, Takefumi; Iso, Naoki; Sagari, Akira; Ogahara, Kakuya; Kitajima, Eiji; Tanaka, Koji; Tabira, Takayuki; Higashi, Toshio
2014-01-01
Introduction The aim of the present study was to investigate how the speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS). Methods Eighteen healthy subjects watched a video clip of a person catching a ball, played at three different speeds (normal-, half-, and quarter-speed). MEPs were induced by TMS when the model's hand had opened to the widest extent just before catching the ball (“open”) and when the model had just caught the ball (“catch”). These two events were locked to specific frames of the video clip (“phases”), rather than occurring at specific absolute times, so that they could easily be compared across different speeds. MEPs were recorded from the thenar (TH) and abductor digiti minimi (ADM) muscles of the right hand. Results The MEP amplitudes were higher when the subjects watched the video clip at low speed than when they watched the clip at normal speed. A repeated-measures ANOVA, with the factor VIDEO-SPEED, showed significant main effects. Bonferroni's post hoc test showed that the following MEP amplitude differences were significant: TH, normal vs. quarter; ADM, normal vs. half; and ADM, normal vs. quarter. Paired t-tests showed that the significant MEP amplitude differences between TMS phases under each speed condition were TH, “catch” higher than “open” at quarter speed; ADM, “catch” higher than “open” at half speed. Conclusions These results indicate that the excitability of M1 was higher when the observed action was played at low speed. Our findings suggest that the action observation system became more active when the subjects observed the video clip at low speed, because the subjects could then recognize the elements of action and intention in others. PMID:25479161
Imizu, S; Kato, Y; Sangli, A; Oguri, D; Sano, H
2008-08-01
The objective of this article was to assess the clinical use of near-infrared indocyanine green video angiography integrated into an operative Pentero neurosurgical microscope (Carl Zeiss, Oberkochen, Germany): the completeness of clipping with total occlusion of the aneurysmal lumen, real-time assessment of vascular patency in the parent, branching, and perforating vessels, intraoperative assessment of blood flow, image quality, spatial resolution, and clinical value in difficult aneurysms. Thirteen patients with aneurysms were operated upon. An infrared camera was adapted onto the OPMI Pentero microscope with a special filter and an infrared excitation light to illuminate the operating field, designed to allow passage of the near-infrared light required for excitation of indocyanine green (ICG), which was used as the intravascular marker. The intravascular fluorescence was imaged with a video camera attached to the microscope. Excitation light (700-850 nm) from a modified microscope light source was projected onto the surgical field, and the returning ICG fluorescence (780-950 nm) passed back into the optical path of the microscope, which was used to detect the completeness of aneurysmal clipping. Incomplete clipping was detected using indocyanine green video angiography in three patients (1 female and 2 males) with unruptured complicated aneurysms. There were no adverse effects after injection of indocyanine green. The completeness of clipping was inadequately detected by a Doppler ultrasound miniprobe and rigid endoscopy and was thus complemented by indocyanine green video angiography. Operative microscope-integrated ICG video angiography, as a new intraoperative method for detecting vascular flow, was found to be quick, reliable, and cost-effective, and possibly a substitute or adjunct for Doppler ultrasonography or intraoperative DSA, which is presently the gold standard.
The simplicity of the method, the speed with which the investigation can be performed, the quality of the images, and the outcome of surgical procedures have all reduced the need for angiography. This technique may be useful during routine aneurysm surgery as an independent form of angiography and/or as an adjunct to intraoperative or postoperative DSA.
Software tools for developing an acoustics multimedia CD-ROM
NASA Astrophysics Data System (ADS)
Bigelow, Todd W.; Wheeler, Paul A.
2003-10-01
A multimedia CD-ROM was developed to accompany the textbook, Science of Sound, by Tom Rossing. This paper discusses the multimedia elements included in the CD-ROM and the various software packages used to create them. PowerPoint presentations with an audio-track background were converted to web pages using Impatica. Animations of acoustic examples and quizzes were developed using Flash by Macromedia. Vegas Video and Sound Forge by Sonic Foundry were used for editing video and audio clips, while Cleaner by Discreet was used to compress the clips for use over the internet. Math tutorials were presented as whiteboard presentations, using Hitachi's Starboard to create the graphics and TechSmith's Camtasia Studio to record the presentations. The CD-ROM is in a web-page format created with Macromedia's Dreamweaver. All of these elements are integrated into a single course supplement that can be viewed on any computer with a web browser.
Video in the Middle: Purposeful Design of Video-Based Mathematics Professional Development
ERIC Educational Resources Information Center
Seago, Nanette; Koellner, Karen; Jacobs, Jennifer
2018-01-01
In this article the authors described their exploration of a particular design element they labeled "video in the middle." As part of the video in the middle design, the viewing of carefully selected video clips from teachers' classrooms is sandwiched between pre- and postviewing activities that are expected to support teachers'…
NASA Astrophysics Data System (ADS)
Corten-Gualtieri, Pascale; Ritter, Christian; Plumat, Jim; Keunings, Roland; Lebrun, Marcel; Raucent, Benoit
2016-07-01
Most students enter their first university physics course with a system of beliefs and intuitions which are often inconsistent with the Newtonian frame of reference. This article presents an experiment of collaborative learning aiming at helping first-year students in an engineering programme to transition from their naïve intuition about dynamics to the Newtonian way of thinking. In a first activity, students were asked to critically analyse the contents of two video clips from the point of view of Newtonian mechanics. In a second activity, students had to design and realise their own video clip to illustrate a given aspect of Newtonian mechanics. The preparation of the scenario for the second activity required looking up and assimilating scientific knowledge. The efficiency of the activity was assessed on an enhanced version of the statistical analysis method proposed by Hestenes and Halloun, which relies on a pre-test and a post-test to measure individual learning.
The Power of Creativity: Enhancing Academic and Personal Growth for Gifted Learners
ERIC Educational Resources Information Center
McCollister, Karen; Sayler, Micheal F.
2010-01-01
In order for students to learn well, someone or something must capture their interest. Novelty and intellectual challenges are good approaches for gaining attention. Imaginative strategies include storytelling, discrepant events, dressing in costumes, music, dynamic video clips, comic strips, humor, models, puppets, the element of surprise,…
Counterfactual Thinking as a Mechanism in Narrative Persuasion
ERIC Educational Resources Information Center
Tal-Or, Nurit; Boninger, David S.; Poran, Amir; Gleicher, Faith
2004-01-01
Two experiments examined the impact of counterfactual thinking on persuasion. Participants in both experiments were exposed to short video clips in which an actor described a car accident that resulted in serious injury. In the narrative description, the salience of a counterfactual was manipulated by either explicitly including the counterfactual…
Liteplo, Andrew S; Noble, Vicki E; Attwood, Ben H C
2011-11-01
As the use of point-of-care sonography spreads, so too does the need for remote expert over-reading via telesonography. We sought to assess the feasibility of using familiar, widespread, and cost-effective existing technology to allow remote over-reading of sonograms in real time and to compare 4 different methods of transmission and communication for both feasibility of transmission and image quality. Sonographic video clips were transmitted using 2 different connections (WiFi and 3G) and via 2 different videoconferencing modalities (iChat [Apple Inc, Cupertino, CA] and Skype [Skype Software Sàrl, Luxembourg]), for a total of 4 different permutations. The clips were received at a remote location, recorded, and then scored by expert reviewers for image quality, resolution, and detail. Wireless transmission of sonographic clips was feasible in all cases when WiFi was used and when Skype was used over a 3G connection. Images transmitted via a WiFi connection were statistically superior to those transmitted via 3G in all parameters of quality (average P = .031), and those sent by iChat were superior to those sent by Skype but not statistically so (average P = .057). Wireless transmission of sonographic video clips using inexpensive hardware, free videoconferencing software, and domestic Internet networks is feasible with retention of image quality sufficient for interpretation. WiFi transmission results in greater image quality than transmission by a 3G network.
ERIC Educational Resources Information Center
Foster, Andrea L.
2006-01-01
American college students are increasingly posting videos of their lives online, owing to Web sites like Vimeo and Google Video that host video material for free and to the ubiquity of camera phones and other devices that can take video clips. However, the growing popularity of online socializing has many safety experts worried that students could be…
Measuring Mathematics Teachers' Professional Competence by Using Video Clips (COACTIV Video)
ERIC Educational Resources Information Center
Bruckmaier, G.; Krauss, S.; Blum, W.; Leiss, D.
2016-01-01
The COACTIV video study is part of the COACTIV research program, in which secondary mathematics teachers whose students participated in PISA 03/04 were examined with respect to their professional knowledge, motivational orientations, beliefs, and self-regulation. In the video study, 284 German secondary mathematics teachers were asked to specify…
ERIC Educational Resources Information Center
Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.
2014-01-01
Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…
Visual Analytics and Storytelling through Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.
2005-10-31
This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.
Mobile-Based Video Learning Outcomes in Clinical Nursing Skill Education
Lee, Nam-Ju; Chae, Sun-Mi; Kim, Haejin; Lee, Ji-Hye; Min, Hyojin Jennifer; Park, Da-Eun
2016-01-01
Mobile devices are a regular part of daily life among the younger generations. Thus, now is the time to apply mobile device use to nursing education. The purpose of this study was to identify the effects of a mobile-based video clip on learning motivation, competence, and class satisfaction in nursing students using a randomized controlled trial with a pretest and posttest design. A total of 71 nursing students participated in this study: 36 in the intervention group and 35 in the control group. A video clip of how to perform a urinary catheterization was developed, and the intervention group was able to download it to their own mobile devices for unlimited viewing throughout 1 week. All of the students participated in a practice laboratory to learn urinary catheterization and were blindly tested for their performance skills after participation in the laboratory. The intervention group showed significantly higher levels of learning motivation and class satisfaction than did the control. Of the fundamental nursing competencies, the intervention group was more confident in practicing catheterization than their counterparts. Our findings suggest that video clips using mobile devices are useful tools that educate student nurses on relevant clinical skills and improve learning outcomes. PMID:26389858
Assessment of colon polyp morphology: Is education effective?
Kim, Jae Hyun; Nam, Kyoung Sik; Kwon, Hye Jung; Choi, Youn Jung; Jung, Kyoungwon; Kim, Sung Eun; Moon, Won; Park, Moo In; Park, Seun Ja
2017-01-01
AIM To determine the inter-observer variability for colon polyp morphology and to identify whether education can improve agreement among observers. METHODS For purposes of the tests, we recorded colonoscopy video clips that included scenes visualizing the polyps. A total of 15 endoscopists and 15 nurses participated in the study. Participants watched 60 video clips of the polyp morphology scenes and then estimated polyp morphology (pre-test). After education for 20 min, participants performed a second test in which the order of 60 video clips was changed (post-test). To determine if the effectiveness of education was sustained, four months later, a third, follow-up test was performed with the same participants. RESULTS The overall Fleiss’ kappa value of the inter-observer agreement was 0.510 in the pre-test, 0.618 in the post-test, and 0.580 in the follow-up test. The overall diagnostic accuracy of the estimation for polyp morphology in the pre-, post-, and follow-up tests was 0.662, 0.797, and 0.761, respectively. After education, the inter-observer agreement and diagnostic accuracy of all participants improved. However, after four months, the inter-observer agreement and diagnostic accuracy of expert groups were markedly decreased, and those of beginner and nurse groups remained similar to pre-test levels. CONCLUSION The education program used in this study can improve inter-observer agreement and diagnostic accuracy in assessing the morphology of colon polyps; it is especially effective when first learning endoscopy. PMID:28974894
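Overall Fleiss' kappa values of the kind reported above (0.510, 0.618, 0.580) are computed from a clips-by-categories count matrix. A minimal sketch in Python, using hypothetical ratings rather than the study's data:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a list of rows, one per rated video clip;
    counts[i][j] = number of raters assigning clip i to category j.
    Every row must sum to the same number of raters n."""
    N = len(counts)
    n = sum(counts[0])
    # Per-clip agreement P_i and per-category proportions p_j
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    p = [sum(row[j] for row in counts) / (N * n)
         for j in range(len(counts[0]))]
    P_bar = sum(P) / N               # observed agreement
    P_e = sum(pj * pj for pj in p)   # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

For example, three raters agreeing unanimously on every clip gives kappa = 1.0, while agreement exactly at chance level gives 0.0.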
2010-01-01
Background In problem-based learning (PBL), tutors play an essential role in facilitating and efficiently structuring tutorials to enable students to construct individual cognitive networks, and have a significant impact on students' performance in subsequent assessments. The necessity of elaborate training to fulfil this complex role is undeniable. In the plethora of data on PBL however, little attention has been paid to tutor training which promotes competence in the moderation of specific difficult situations commonly encountered in PBL tutorials. Methods Major interactive obstacles arising in PBL tutorials were identified from prior publications. Potential solutions were defined by an expert group. Video clips were produced addressing the tutor's role and providing exemplary solutions. These clips were embedded in a PBL tutor-training course at our medical faculty combining PBL self-experience with a non-medical case. Trainees provided pre- and post-intervention self-efficacy ratings regarding their PBL-related knowledge, skills, and attitudes, as well as their acceptance and the feasibility of integrating the video clips into PBL tutor-training (all items: 100 = completely agree, 0 = don't agree at all). Results An interactive online tool for PBL tutor training was developed comprising 18 video clips highlighting difficult situations in PBL tutorials to encourage trainees to develop and formulate their own intervention strategies. In subsequent sequences, potential interventions are presented for the specific scenario, with a concluding discussion which addresses unresolved issues. The tool was well accepted and considered worth the time spent on it (81.62 ± 16.91; 62.94 ± 16.76). Tutors considered the videos to prepare them well to respond to specific challenges in future tutorials (75.98 ± 19.46). 
The entire training, which comprised PBL self-experience and video clips as integral elements, improved tutors' self-efficacy with respect to dealing with problematic situations (pre: 36.47 ± 26.25, post: 66.99 ± 21.01; p < .0001) and significantly increased appreciation of PBL as a method (pre: 61.33 ± 24.84, post: 76.20 ± 20.12; p < .0001). Conclusions The interactive tool with instructional video clips is designed to broaden the view of future PBL tutors in terms of recognizing specific obstacles to functional group dynamics and developing individual intervention strategies. We show that this tool is well accepted and can be successfully integrated into PBL tutor-training. Free access is provided to the entire tool at http://www.medizinische-fakultaet-hd.uni-heidelberg.de/fileadmin/PBLTutorTraining/player.swf. PMID:20604927
Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing
2009-02-01
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
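Evaluation frameworks of this kind typically score detections frame by frame by matching them to ground-truth boxes at an intersection-over-union (IoU) threshold. A minimal sketch, assuming a greedy matcher and the common 0.5 threshold; these are illustrative conventions, not necessarily this benchmark's exact metrics:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def frame_precision_recall(detections, truths, thresh=0.5):
    """Greedily match detections to ground-truth boxes at an IoU
    threshold and return (precision, recall) for one frame."""
    unmatched = list(truths)
    tp = 0
    for d in detections:
        best = max(unmatched, key=lambda t: iou(d, t), default=None)
        if best is not None and iou(d, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    prec = tp / len(detections) if detections else 1.0
    rec = tp / len(truths) if truths else 1.0
    return prec, rec
```

Aggregating such per-frame scores over the annotated I-frames of each 2.5-minute clip yields corpus-level detection metrics of the kind the framework standardizes.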
This Rock 'n' Roll Video Teaches Math
ERIC Educational Resources Information Center
Niess, Margaret L.; Walker, Janet M.
2009-01-01
Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…
Pulse-train Stimulation of Primary Somatosensory Cortex Blocks Pain Perception in Tail Clip Test
Lee, Soohyun; Hwang, Eunjin; Lee, Dongmyeong
2017-01-01
Human studies of brain stimulation have demonstrated modulatory effects on the perception of pain. However, whether the primary somatosensory cortical activity is associated with antinociceptive responses remains unknown. Therefore, we examined the antinociceptive effects of neuronal activity evoked by optogenetic stimulation of primary somatosensory cortex. Optogenetic transgenic mice were subjected to continuous or pulse-train optogenetic stimulation of the primary somatosensory cortex at frequencies of 15, 30, and 40 Hz, during a tail clip test. Reaction time was measured using a digital high-speed video camera. Pulse-train optogenetic stimulation of primary somatosensory cortex showed a delayed pain response with respect to a tail clip, whereas no significant change in reaction time was observed with continuous stimulation. In response to the pulse-train stimulation, video monitoring and local field potential recording revealed associated paw movement and sensorimotor rhythms, respectively. Our results show that optogenetic stimulation of primary somatosensory cortex at beta and gamma frequencies blocks transmission of pain signals in tail clip test. PMID:28442945
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g. table tennis, tennis, and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels, including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two representative types of racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
Early retreatment after surgical clipping of ruptured intracranial aneurysms.
Ito, Yoshiro; Yamamoto, Tetsuya; Ikeda, Go; Tsuruta, Wataro; Uemura, Kazuya; Komatsu, Yoji; Matsumura, Akira
2017-09-01
Although a rerupture after surgical clipping of ruptured intracranial aneurysms is rare, it is associated with high morbidity and mortality. The causes for retreatment and rupture after surgical clipping are not clearly defined. From a prospectively maintained database of 244 patients who had undergone surgical clipping of ruptured intracranial aneurysms, we selected patients who experienced retreatment or rerupture within 30 days after surgical clipping. Aneurysm occlusions were examined by microvascular Doppler ultrasonography and indocyanine green video-angiography. Indications for retreatment included rerupture and partial occlusion. We analyzed the characteristics and causes of early retreatment. Six patients (2.5%, 95% CI 0.9 to 5.3%) were retreated within 30 days after surgical clipping, including two patients (0.8%, 95% CI 0.1 to 2.9%) who experienced a rerupture. The retreated aneurysms were found in the anterior communicating artery (AcomA) (n = 5) and basilar artery (n = 1). Retreatment of the AcomA (7.5%) was performed significantly more frequently than that of other arteries (0.56%) (p < 0.01). A laterally projected AcomA aneurysm (17.4%) was more frequently retreated than were other aneurysm types (2.3%). Cases of laterally projecting AcomA aneurysms tended to result from an incomplete clip placed using a pterional approach from the opposite side of the aneurysm projection. Despite developments, the rates of retreatment and rerupture after surgical clipping remain similar to those reported previously. Retreatment of the AcomA was significantly more frequent than was retreatment of other arteries. Patients underwent retreatment more frequently when they were originally treated for lateral type aneurysms using a pterional approach from the opposite side of the aneurysm projection. The treatment method and evaluation modalities should be considered carefully for AcomA aneurysms in particular.
What Leadership Looks Like: Videos Help Aspiring Leaders Get the Picture
ERIC Educational Resources Information Center
Clark, Lynn V.
2012-01-01
Finding out what instructional leadership looks like is at the center of a new trend in leadership development: videos of practice. These range from minimally edited videos of a leader's own practice to highly edited clips that focus on successful leadership actions in authentic school settings. While videos of practice are widely used in teacher…
ERIC Educational Resources Information Center
Lo, Ya-yu; Burk, Bradley; Burk, Bradley; Anderson, Adrienne L.
2014-01-01
The current study examined the effects of a modified video prompting procedure, namely progressive video prompting, to increase technique accuracy of shooting a basketball in the school gymnasium of three 11th-grade students with moderate intellectual disability. The intervention involved participants viewing video clips of an adult model who…
ERIC Educational Resources Information Center
Cihak, David F.; Bowlin, Tammy
2009-01-01
The researchers examined the use of video modeling by means of a handheld computer as an alternative instructional delivery system for learning basic geometry skills. Three high school students with learning disabilities participated in this study. Through video modeling, teacher-developed video clips showing step-by-step problem solving processes…
A Powerful Teaching Tool: Self-Produced Videos
ERIC Educational Resources Information Center
Case, Patty; Hino, Jeff
2010-01-01
Video--once complex and expensive to create with high distribution costs--has become more affordable and highly accessible in addition to being a powerful teaching tool. Self-produced videos are one way educators can connect with a growing number of on-line learners. The authors describe a pilot project in which a series of video clips were…
Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning
ERIC Educational Resources Information Center
Kenworthy-U'Ren, Amy; Erickson, Anthony
2009-01-01
In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…
MANHATTAN: The View From Los Alamos of History's Most Secret Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Alan Brady
This presentation covers the political and scientific events leading up to the creation of the Manhattan Project. The creation of the Manhattan Project’s three most significant sites--Los Alamos, Oak Ridge, and Hanford--is also discussed. The lecture concludes by exploring the use of the atomic bombs at the end of World War II. The presentation slides include three videos. The first is a short clip of the 100-Ton Test, history’s largest measured blast at that point in time and a pre-test for Trinity, the world’s first nuclear detonation. The second clip features views of Trinity followed by a short statement by the Laboratory’s first director, J. Robert Oppenheimer. The final clip shows Norris Bradbury talking about arms control.
Validity for an integrated laboratory analogue of sexual aggression and bystander intervention.
Parrott, Dominic J; Tharp, Andra Teten; Swartout, Kevin M; Miller, Cameron A; Hall, Gordon C Nagayama; George, William H
2012-01-01
This study sought to develop and validate an integrated laboratory paradigm of sexual aggression and bystander intervention. Participants were a diverse community sample (54% African American) of heterosexual males (N = 156) between 21 and 35 years of age who were recruited to complete the study with a male friend and an ostensibly single, heterosexual female who reported a strong dislike of sexual content in the media. Participants viewed a sexually explicit or nonsexually explicit film clip as part of a contrived media rating task and made individual choices of which film clip to show the female confederate. Immediately thereafter, participants were required to reach consensus on a group decision of which film clip to show the female confederate. Subjecting a target to an unwanted experience with a sexual connotation was operationalized as selection of the sexually explicit video, whereas successful bystander intervention was operationalized as the event of one partner individually selecting the sexually explicit video but then selecting the nonsexually explicit video for the group choice. Results demonstrated that a 1-year history of sexual aggression and endorsement of pertinent misogynistic attitudes significantly predicted selection of the sexually explicit video. In addition, bystander efficacy significantly predicted men's successful prevention of their male peer's intent to show the female confederate a sexually explicit video. Discussion focused on how these data inform future research and bystander intervention programming for sexual aggression. © 2012 Wiley Periodicals, Inc.
Learning about Palau. [CD-ROM].
ERIC Educational Resources Information Center
Pacific Resources for Education and Learning, Honolulu, HI.
This CD-ROM contains information about the Republic of Palau, a sovereign state in the Pacific Ocean. The CD-ROM contains full-color photos and video clips of selected sites and events on Palau; interactive maps of Palau, including land forms, places of cultural significance, and public schools; and a glossary of geographic, geological, and…
YouTube as a Qualitative Research Asset: Reviewing User Generated Videos as Learning Resources
ERIC Educational Resources Information Center
Chenail, Ronald J.
2011-01-01
YouTube, the video hosting service, offers students, teachers, and practitioners of qualitative research a unique reservoir of video clips introducing basic qualitative research concepts, sharing qualitative data from interviews and field observations, and presenting completed research studies. This web-based site also affords qualitative…
Approaches to Interactive Video Anchors in Problem-based Science Learning
NASA Astrophysics Data System (ADS)
Kumar, David Devraj
2010-02-01
This paper is an invited adaptation of the IEEE Education Society Distinguished Lecture Approaches to Interactive Video Anchors in Problem-Based Science Learning. Interactive video anchors have a cognitive theory base, and they help to enlarge the context of learning with information-rich real-world situations. Carefully selected movie clips and custom-developed regular videos and virtual simulations have been successfully used as anchors in problem-based science learning. Examples discussed include a range of situations such as Indiana Jones tackling a trap, a teenager misrepresenting lead for gold, an agriculture inspection at the US border, counterintuitive events, analyzing a river ecosystem for pollution, and finding the cause of illness in a nineteenth century river city. Suggestions for teachers are provided.
Emotional processing modulates attentional capture of irrelevant sound input in adolescents.
Gulotta, B; Sadia, G; Sussman, E
2013-04-01
The main goal of this study was to investigate how emotional processing modulates the allocation of attention to irrelevant background sound events in adolescence. We examined the effect of viewing positively and negatively valenced video clips on components of event-related brain potentials (ERPs) while irrelevant sounds were presented to the ears. All sounds evoked the P1, N1, P2, and N2 components. The infrequent, randomly occurring novel environmental sounds evoked the P3a component in all trial types. The main finding was that the P3a component was larger in amplitude when evoked by salient, distracting background sound events while participants watched negatively charged video clips than while they viewed positive or neutral video clips. The results suggest that the threshold for involuntary attention to the novel sounds was lowered during viewing of the negative movie contexts. This points to a survival mechanism: more automatic processing of irrelevant sounds would be needed to monitor the unattended environment in situations perceived as threatening. Copyright © 2012 Elsevier B.V. All rights reserved.
Prediction of transmission distortion for wireless video communication: analysis.
Chen, Zhifeng; Wu, Dapeng
2012-03-01
Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
Standardized access, display, and retrieval of medical video
NASA Astrophysics Data System (ADS)
Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.
1999-05-01
The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
Deconstructing "Good Practice" Teaching Videos: An Analysis of Pre-Service Teachers' Reflections
ERIC Educational Resources Information Center
Ineson, Gwen; Voutsina, Chronoula; Fielding, Helen; Barber, Patti; Rowland, Tim
2015-01-01
Video clips of mathematics lessons are used extensively in pre-service teacher education and continuing professional development activities. Given course time constraints, an opportunity to critique these videos is not always possible. Because of this, and because pre-service teachers make extensive use of material found during internet searches,…
The Role of Fantasy-Reality Distinctions in Preschoolers' Learning from Educational Video
ERIC Educational Resources Information Center
Richert, Rebekah A.; Schlesinger, Molly A.
2017-01-01
The current study examined if preschoolers' understanding of fantasy and reality are related to their learning from educational videos. Forty-nine 3- to 6-year-old children watched short clips of popular educational programs in which animated characters solved problems. Following video viewing, children attempted to solve real-world problems…
ERIC Educational Resources Information Center
Watters, Christopher
2002-01-01
Some impressive video movies have appeared as supplemental material in recent issues of the "Journal of Cell Biology (JCB)", and in this article, the author reviews several of them. In general, the JCB format provides each video clip with its own caption, in addition to any contextual references in the article itself, and a separate descriptive…
ERIC Educational Resources Information Center
Guy, Richard,; Byrne, Bruce; Dobos, Marian
2018-01-01
Anatomy and physiology interactive video clips were introduced into a blended learning environment, as an optional resource, and were accessed by ~50% of the cohort. Student feedback indicated that clips were engaging, assisted understanding of course content, and provided lecture support. Students could also access two other optional online…
On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements
ERIC Educational Resources Information Center
Bangou, Francis
2014-01-01
The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we propose straightforward but robust histogram-based background estimation and player detection methods for badminton video clips and compare the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. These preliminary results indicate that the proposed histogram-based method can estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
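As a hedged illustration of the histogram-based idea this abstract describes (the paper's actual binning, color handling, and thresholds are not given, so those details below are assumptions): each pixel's background value is taken as the mode of its intensity histogram across the clip's frames, and a player/foreground pixel is one that deviates strongly from that background.

```python
# Illustrative sketch only: per-pixel histogram mode as background estimate,
# with thresholded deviation as foreground. Bin count and threshold are
# assumed values, not taken from the paper.

def estimate_background(frames, bins=16):
    """frames: list of 2-D lists of 0-255 grayscale ints; returns one 2-D background image."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            hist = [0] * bins
            for f in frames:
                hist[f[y][x] * bins // 256] += 1
            # centre of the most populated bin approximates the background value
            bg[y][x] = (hist.index(max(hist)) * 256 + 128) // bins
    return bg

def foreground_mask(frame, bg, thresh=40):
    """True where the frame deviates strongly from the estimated background."""
    return [[abs(p - b) > thresh for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, bg)]
```

For a mostly static clip, a pixel that is 10 in most frames keeps a background value near 10 even if a player briefly passes through it, and only the frame containing the player triggers the mask.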
A Sieving ANN for Emotion-Based Movie Clip Classification
NASA Astrophysics Data System (ADS)
Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon
Effective classification and analysis of semantic content are very important for content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy, and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested on 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval and indexing.
Evaluation of the effectiveness of color attributes for video indexing
NASA Astrophysics Data System (ADS)
Chupeau, Bertrand; Forest, Ronan
2001-10-01
Color features are reviewed and their effectiveness assessed in the application framework of key-frame clustering for abstracting unconstrained video. Existing color spaces and associated quantization schemes are first studied. Description of global color distribution by means of histograms is then detailed. In our work, 12 combinations of color space and quantization were selected, together with 12 histogram metrics. Their respective effectiveness with respect to picture similarity measurement was evaluated through a query-by-example scenario. For that purpose, a set of still-picture databases was built by extracting key frames from several video clips, including news, documentaries, sports and cartoons. Classical retrieval performance evaluation criteria were adapted to the specificity of our testing methodology.
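The abstract does not enumerate its 12 color-space/quantization combinations or 12 histogram metrics, so the following is only an illustrative sketch of the general recipe under assumed parameters: quantize RGB into a coarse histogram per key frame, then compare frames with histogram intersection, one classic similarity measure in this family.

```python
# Sketch of one point in the design space the paper evaluates: a uniform
# 4x4x4 RGB quantization (an assumption; the paper's choices are not listed
# here) plus histogram intersection as the similarity metric.

def rgb_histogram(pixels, levels=4):
    """pixels: iterable of (r, g, b) tuples in 0-255; returns a normalized histogram."""
    hist = [0.0] * (levels ** 3)
    step = 256 // levels
    for r, g, b in pixels:
        idx = (r // step) * levels * levels + (g // step) * levels + (b // step)
        hist[idx] += 1
    n = sum(hist) or 1.0
    return [v / n for v in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

In a query-by-example evaluation like the one described, each key frame's histogram is matched against the database and retrieval quality is scored by how highly frames from the same scene rank.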
NASA Astrophysics Data System (ADS)
Urbano, L.
2005-12-01
We have developed and tested an internet-based application that facilitates the creation of animations for use in lectures and permits movie production by students in laboratory classes. Animations have been found to be extremely useful educational aids in the geosciences, particularly relating to topics requiring comprehension of geospatial relationships. With this program, instructors are able to assemble and caption animations using an online video clip catalogue and present these movies through a standard internet browser. Captioning increases student comprehension by increasing the multimodality of information delivery. For student use, we developed an exercise for introductory, undergraduate, laboratory class sections that was informed by learning pedagogy, particularly as related to game-based learning. Students were asked to assemble video clips and captions into a coherent movie to explain geospatial concepts, with prompts such as "Explain why we have seasons." The affinity of students to digital technology, particularly computer games and digital media, makes this type of exercise particularly captivating to the typical undergraduate. The opportunity to select and arrange video clips (and add background music) into a unique production offers students a greater degree of ownership of the learning process and allows unique non-linear pathways for accomplishing learning objectives. Use in a laboratory section permitted rapid feedback from the instructor. The application was created using open-source software, and the database was populated with video clips and music contributed by faculty and students under a non-commercial-use license. This tool has the potential to permit the wider dissemination of scientific research results, given the increasing use of animations for scientific visualization, because it eases the creation of multiple presentations targeted to various audiences and allows user participation in the creation of multimedia.
Cross-Modal Multivariate Pattern Analysis
Meyer, Kaspar; Kaplan, Jonas T.
2011-01-01
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are predicted not within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices. PMID:22105246
Li, Benjamin J.; Bailenson, Jeremy N.; Pines, Adam; Greenleaf, Walter J.; Williams, Leanne M.
2017-01-01
Virtual reality (VR) has been proposed as a methodological tool to study the basic science of psychology and other fields. One key advantage of VR is that sharing of virtual content can lead to more robust replication and representative sampling. A database of standardized content will help fulfill this vision. There are two objectives to this study. First, we seek to establish and allow public access to a database of immersive VR video clips that can act as a potential resource for studies on emotion induction using virtual reality. Second, given the large sample size of participants needed to obtain reliable valence and arousal ratings for our videos, we were able to explore the possible links between the head movements of the observer and the emotions he or she feels while viewing immersive VR. To accomplish our goals, we sourced and tested 73 immersive VR clips which participants rated on valence and arousal dimensions using self-assessment manikins. We also tracked participants' rotational head movements as they watched the clips, allowing us to correlate head movements and affect. Based on past research, we predicted relationships between the standard deviation of head yaw and valence and arousal ratings. Results showed that the stimuli varied reasonably well along the dimensions of valence and arousal, with a slight underrepresentation of clips that are of negative valence and highly arousing. The standard deviation of yaw positively correlated with valence, while a significant positive relationship was found between head pitch and arousal. The immersive VR clips tested are available online as supplemental material. PMID:29259571
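A minimal sketch of the kind of analysis this abstract reports: correlating the standard deviation of head yaw per clip with that clip's mean valence rating. This is not the authors' code; the Pearson formulation below is standard, and all variable and function names are assumptions.

```python
# Hypothetical sketch: per-clip head-yaw variability vs. affect ratings.
import math

def yaw_sd(yaw_samples):
    """Population standard deviation of a clip's yaw samples (degrees)."""
    n = len(yaw_samples)
    m = sum(yaw_samples) / n
    return math.sqrt(sum((y - m) ** 2 for y in yaw_samples) / n)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Given one yaw trace per clip and one mean valence rating per clip, the reported relationship would correspond to `pearson([yaw_sd(t) for t in traces], valences)` being positive.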
Automated Music Video Generation Using Multi-level Feature-based Segmentation
NASA Astrophysics Data System (ADS)
Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo
The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.
ERIC Educational Resources Information Center
König, Johannes
2015-01-01
The study aims at developing and exploring a novel video-based assessment that captures classroom management expertise (CME) of teachers and for which statistical results are provided. CME measurement is conceptualized by using four video clips that refer to typical classroom management situations in which teachers are heavily challenged…
ERIC Educational Resources Information Center
Finkbeiner, Claudia; Schluer, Jennifer
2017-01-01
This paper contains a collaborative video-based approach to foster prospective teachers' diagnostic skills with respect to pupils' L2 reading processes. Together with a peer, the prospective teachers watched, systematically selected, analysed and commented on clips from a comprehensive video corpus on L2 reading strategies. In order to assist the…
ERIC Educational Resources Information Center
Bowes, David R.
2014-01-01
Video clips are an excellent way to enhance lecture material. Television commercials are a source of video examples that should not be overlooked and they are readily available on the internet. They are familiar, short, self-contained, constantly being created, and often funny. This paper describes several examples of television commercials that…
YouTube Video Project: A "Cool" Way to Learn Communication Ethics
ERIC Educational Resources Information Center
Lehman, Carol M.; DuFrene, Debbie D.; Lehman, Mark W.
2010-01-01
The millennial generation embraces new technologies as a natural way of accessing and exchanging information, staying connected, and having fun. YouTube, a video-sharing site that allows users to upload, view, and share video clips, is among the latest "cool" technologies for enjoying quick laughs, employing a wide variety of corporate activities,…
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and assign a degree of privacy to the video content for each database level automatically. Second, a new algorithm is developed to protect video content privacy at the level of the individual video clip by filtering out the privacy-sensitive human objects automatically. In order to integrate the patient training videos from multiple competitive organizations for constructing a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and prevent statistical inference from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have also provided very convincing results.
Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips.
Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas
2016-09-01
Video must now be considered as a precious tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.
Rampersad, Sally E; Martin, Lizabeth D; Geiduschek, Jeremy M; Weiss, Gillian K; Bates, Shelly W; Martin, Lynn D
2013-07-01
Patients with central venous catheters who are transferred out of the Intensive Care Unit to the care of an anesthesiology team for an operation or interventional radiology procedure had excessive rates of catheter-associated blood stream infection (CABSI). We convened a multi-disciplinary team to audit anesthesia practice and to develop countermeasures for those aspects of practice that were thought to be contributing to CABSIs. It was noted that provider behavior changed in the presence of an auditor (the Hawthorne effect), so video recordings were used in the hope that this effect would be reduced. Clips were chosen from the hours of video recordings (without audio) that showed medication administration, airway management, and touching the anesthesia cart of equipment and supplies. These clips were viewed by three observers, and measurements were made to assess intra-rater and inter-rater reliability. The clips were then viewed to quantify differences in practice before and after our bundle of "best practices" was introduced. Although video recording has been used to evaluate adherence to resuscitation protocols in both trauma and neonatal resuscitation (Pediatric Emergency Care, 26, 2010, 803; Pediatrics, 117, 2006, 658; Pediatrics, 106, 2000, 654), we believe this is the first time that video has been used to record before-and-after behaviors for an anesthesia quality improvement initiative. © 2013 John Wiley & Sons Ltd.
Parker, Alton; Rubinfeld, Ilan; Azuh, Ogochukwu; Blyden, Dionne; Falvo, Anthony; Horst, Mathilda; Velanovich, Vic; Patton, Pat
2010-03-01
Technology currently exists for the application of remote guidance in the laparoscopic operating suite. However, these solutions are costly and require extensive preparation and reconfiguration of current hardware. We propose a solution using existing technology: sending video of laparoscopic cholecystectomy to the Blackberry Pearl device (RIM, Waterloo, ON, Canada) for remote guidance purposes. This technology is time- and cost-efficient, as well as reliable. After identifying the critical maneuver during a laparoscopic cholecystectomy as the division of the cystic duct, we captured a segment of video before its transection. Video was captured using the laparoscopic camera input sent via a DVI2USB Solo Frame Grabber (Epiphan, Ottawa, Canada) to a video recording application on a laptop. Seven- to 40-second video clips were recorded. Each video clip was then converted to an .mp4 file and uploaded to our server, and a link was sent to the consultant via e-mail. The consultant accessed the file via Blackberry for viewing. After reviewing the video, the consultant was able to confidently comment on the operation. Clips of approximately 7 to 40 seconds from 10 laparoscopic cholecystectomies were recorded and transferred to the consultant using our method. All 10 video clips were reviewed and deemed adequate for decision making. Remote guidance for laparoscopic cholecystectomy with existing technology can be accomplished with relatively low cost and minimal setup. Additional evaluation of our methods will aim to establish reliability, validity, and accuracy. Using our method, other forms of remote guidance may be feasible, such as other laparoscopic procedures, diagnostic ultrasonography, and remote intensive care unit monitoring. In addition, this method of remote guidance may be extended to centers with smaller budgets, allowing ubiquitous use of neighboring consultants and improved safety for our patients. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Springer, Kristen S; George, Steven Z; Robinson, Michael E
2016-08-01
Previous studies have not examined the assessment of chronic low back pain (CLBP) and pain-related anxiety from a fear avoidance model through the use of motion-capture software and virtual human technologies. The aim of this study was to develop and assess the psychometric properties of an interactive, technologically based hierarchy that can be used to assess patients with pain and pain-related anxiety. We enrolled 30 licensed physical therapists and 30 participants with CLBP. Participants rated 21 video clips of a 3-D animated character (avatar) engaging in activities that are typically feared by patients with CLBP. The results of the study indicate that physical therapists found the virtual hierarchy clips acceptable and depicted realistic patient experiences. Most participants with CLBP reported at least 1 video clip as being sufficiently anxiety-provoking for use clinically. Therefore, this study suggests a hierarchy of fears can be created out of 21 virtual patient video clips paving the way for future clinical use in patients with CLBP. This report describes the development of a computer-based virtual patient system for the assessment of back pain-related fear and anxiety. Results show that people with back pain as well as physical therapists found the avatar to be realistic, and the depictions of behavior anxiety- and fear-provoking. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
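The MYCIN method referenced here combines two pieces of positive evidence as CF = CF1 + CF2·(1 − CF1). A minimal Python sketch of that combination step (the rule names and certainty values are hypothetical, not taken from the paper, which encodes its rules in CLIPS rather than Python):

```python
from functools import reduce

def combine_cf(cf1: float, cf2: float) -> float:
    """MYCIN combination for two positive certainty factors:
    cf = cf1 + cf2 * (1 - cf1)."""
    return cf1 + cf2 * (1 - cf1)

# Hypothetical evidence for the class "news": a motion-based rule
# fires with CF 0.6 and a text-based rule fires with CF 0.5.
evidence = [0.6, 0.5]
cf_news = reduce(combine_cf, evidence)  # 0.6 + 0.5 * 0.4 = 0.8
```

The combination is order-independent and never exceeds 1, which is why MYCIN-style systems can accumulate many weak pieces of evidence gracefully.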
Identifying sports videos using replay, text, and camera motion features
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1999-12-01
Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
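As a sketch of how compressed-domain features of this kind might feed a decision tree classifier, here is a toy version in Python; the thresholds and the tree shape are invented for illustration and are not the classifier learned in the paper:

```python
def is_sports_clip(replay_detected: bool,
                   scene_text_ratio: float,
                   camera_motion_var: float) -> bool:
    """Toy decision tree over MPEG compressed-domain features.
    All thresholds are illustrative, not from the paper."""
    if replay_detected:                 # action replays strongly suggest sports
        return True
    if scene_text_ratio > 0.05:         # score overlays add scene text
        return camera_motion_var > 2.0  # sports cameras pan and zoom a lot
    return False

# A text-heavy clip with high camera motion classifies as sports.
verdict = is_sports_clip(False, 0.08, 3.1)
```

A real decision tree would be induced from labeled clips (e.g. by CART or C4.5) rather than hand-written, but the evaluation path at classification time looks exactly like this cascade of threshold tests.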
ERIC Educational Resources Information Center
Webel, Corey
2013-01-01
In this article I explore high school students' perspectives on working together in a mathematics class in which they spent a significant amount of time solving problems in small groups. The data included viewing session interviews with eight students in the class, where each student watched video clips of their own participation, explaining and…
Reduced spontaneous but relatively normal deliberate vicarious representations in psychopathy
Meffert, Harma; Gazzola, Valeria; den Boer, Johan A.; Bartels, Arnold A. J.
2013-01-01
Psychopathy is a personality disorder associated with a profound lack of empathy. Neuroscientists have associated empathy and its interindividual variation with how strongly participants activate brain regions involved in their own actions, emotions and sensations while viewing those of others. Here we compared brain activity of 18 psychopathic offenders with 26 control subjects while viewing video clips of emotional hand interactions and while experiencing similar interactions. Brain regions involved in experiencing these interactions were not spontaneously activated as strongly in the patient group while viewing the video clips. However, this group difference was markedly reduced when we specifically instructed participants to feel with the actors in the videos. Our results suggest that psychopathy is not a simple incapacity for vicarious activations but rather reduced spontaneous vicarious activations co-existing with relatively normal deliberate counterparts. PMID:23884812
Audiologist-driven versus patient-driven fine tuning of hearing instruments.
Boymans, Monique; Dreschler, Wouter A
2012-03-01
Two methods of fine-tuning the initial settings of hearing aids were compared: an audiologist-driven approach using real-ear measurements, and a patient-driven approach using feedback from real-life situations. The patient-driven fine-tuning was conducted with the Amplifit(®) II system using audiovisual clips. The audiologist-driven fine-tuning was based on the NAL-NL1 prescription rule. Both settings were compared using the same hearing aids in two 6-week trial periods following a randomized, blinded cross-over design. After each trial period, the settings were evaluated by insertion-gain measurements. Performance was evaluated by speech tests in quiet, in noise, and in time-reversed speech, presented at 0° and with spatially separated sound sources. Subjective results were evaluated using extensive questionnaires and audiovisual video clips. A total of 73 participants were included. On average, higher gain values were found for the audiologist-driven settings than for the patient-driven settings, especially at 1000 and 2000 Hz. Better objective performance was obtained with the audiologist-driven settings for speech perception in quiet and in time-reversed speech. This was supported by better scores on a number of subjective judgments and in the subjective ratings of video clips. The perception of loud sounds scored higher with the audiologist-driven than with the patient-driven settings, but the overall preference was in favor of the audiologist-driven settings for 67% of the participants.
Reliability of smartphone-based teleradiology for evaluating thoracolumbar spine fractures.
Stahl, Ido; Dreyfuss, Daniel; Ofir, Dror; Merom, Lior; Raichel, Michael; Hous, Nir; Norman, Doron; Haddad, Elias
2017-02-01
Timely interpretation of computed tomography (CT) scans is of paramount importance in diagnosing and managing spinal column fractures, which can be devastating. Out-of-hospital, on-call spine surgeons are often asked to evaluate CT scans of patients who have sustained trauma to the thoracolumbar spine to make diagnosis and to determine the appropriate course of urgent treatment. Capturing radiographic scans and video clips from computer screens and sending them as instant messages have become common means of communication between physicians, aiding in triaging and transfer decision-making in orthopedic and neurosurgical emergencies. The present study aimed to compare the reliability of interpreting CT scans viewed by orthopedic surgeons in two ways for diagnosing, classifying, and treatment planning for thoracolumbar spine fractures: (1) captured as video clips from standard workstation-based picture archiving and communication system (PACS) and sent via a smartphone-based instant messaging application for viewing on a smartphone; and (2) viewed directly on a PACS. Reliability and agreement study. Thirty adults with thoracolumbar spine fractures who had been consecutively admitted to the Division of Orthopedic Surgery of a Level I trauma center during 2014. Intraobserver agreement. CT scans were captured by use of an iPhone 6 smartphone from a computer screen displaying PACS. Then by use of the WhatsApp instant messaging application, video clips of the scans were sent to the personal smartphones of five spine surgeons. These evaluators were asked to diagnose, classify, and determine the course of treatment for each case. Evaluation of the cases was repeated 4 weeks later, this time using the standard method of workstation-based PACS. Intraobserver agreement was interpreted based on the value of Cohen's kappa statistic. The study did not receive any outside funding. Intraobserver agreement for determining fracture level was near perfect (κ=0.94). 
Intraobserver agreement for AO classification, proposed treatment, neural canal penetration, and Denis classification was substantial (κ values, 0.75, 0.73, 0.71, and 0.69, respectively). Intraobserver agreement for loss of vertebral height and kyphosis was moderate (κ values, 0.55 and 0.45, respectively). CONCLUSIONS: Video clips of CT scans can be readily captured by a smartphone from a workstation-based PACS and then transmitted by use of the WhatsApp instant messaging application. Diagnosing, classifying, and proposing treatment of fractures of the thoracic and lumbar spine can be made with equal reliability by evaluating video clips of CT scans transmitted to a smartphone or by the standard method of viewing the CT scan on a workstation-based PACS. Evaluating video clips of CT scans transmitted to a smartphone is a readily accessible, simple, and inexpensive method. We believe that it can be reliably used for consultations between emergency physicians or orthopedic or neurosurgical residents and offsite, on-call specialists. It might also enable rural or community emergency department physicians to communicate more efficiently and effectively with surgeons in tertiary referral centers. Copyright © 2016 Elsevier Inc. All rights reserved.
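Cohen's kappa, the agreement statistic used throughout this study, discounts the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A stdlib-only Python sketch with invented ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two ratings of the same cases."""
    n = len(ratings_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the marginal label frequencies.
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented example: the same observer classifies 5 fractures twice,
# once from a smartphone video clip and once on the workstation PACS.
smartphone  = ["A", "A", "B", "B", "A"]
workstation = ["A", "A", "B", "B", "B"]
kappa = cohens_kappa(smartphone, workstation)  # ≈ 0.62, "substantial"
```

By the conventional Landis-Koch benchmarks used in the abstract, 0.61-0.80 is "substantial" and 0.41-0.60 "moderate".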
ERIC Educational Resources Information Center
Ljubojevic, Milos; Vaskovic, Vojkan; Stankovic, Srecko; Vaskovic, Jelena
2014-01-01
The main objective of this research is to investigate efficiency of use of supplementary video content in multimedia teaching. Integrating video clips in multimedia lecture presentations may increase students' perception of important information and motivation for learning. Because of that, students can better understand and remember key points of…
Digital Storytelling: A Tool for Teaching and Learning in the YouTube Generation
ERIC Educational Resources Information Center
Dreon, Oliver; Kerper, Richard M.; Landis, Jon
2011-01-01
Say the phrase "Charlie bit my finger," and just about every human being with Internet access visualizes the viral video clip of baby Charlie precociously biting the finger of his brother. With almost 200 million views, this video represents just one of thousands of viral videos that form a core component of modern entertainment, news,…
Effects of Captioning on Video Comprehension and Incidental Vocabulary Learning
ERIC Educational Resources Information Center
Perez, Maribel Montero; Peters, Elke; Clarebout, Geraldine; Desmet, Piet
2014-01-01
This study examines how three captioning types (i.e., on-screen text in the same language as the video) can assist L2 learners in the incidental acquisition of target vocabulary words and in the comprehension of L2 video. A sample of 133 Flemish undergraduate students watched three French clips twice. The control group (n = 32) watched the clips…
ERIC Educational Resources Information Center
Zisimopoulos, Dimitrios; Sigafoos, Jeff; Koutromanos, George
2011-01-01
We evaluated a video prompting and a constant time delay procedure for teaching three primary school students with moderate intellectual disabilities to access the Internet and download pictures related to participation in a classroom History project. Video clips were used as an antecedent prompt and as an error correction technique within a…
Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude
2015-01-01
"Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters.
Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.
2011-01-01
This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.
Inoue, M; Uchida, A; Shinoda, K; Taira, Y; Noda, T; Ohnuma, K; Bissen-Miyajima, H; Hirakata, A
2014-01-01
Purpose To evaluate the images created in a model eye during simulated cataract surgery. Patients and methods This study was conducted as a laboratory investigation and interventional case series. An artificial opaque lens, a clear intraocular lens (IOL), or an irrigation/aspiration (I/A) tip was inserted into the 'anterior chamber' of a model eye with the frosted posterior surface corresponding to the retina. Video images were recorded of the posterior surface of the model eye from the rear during simulated cataract surgery. The video clips were shown to 20 patients before cataract surgery, and the similarity of their visual perceptions to these images was evaluated postoperatively. Results The images of the moving lens fragments and I/A tip and the insertion of the IOL were seen from the rear. The image through the opaque lens and the IOL without moving objects was the light of the surgical microscope from the rear. However, when the microscope light was turned off after IOL insertion, the images of the microscope and operating room were observed by the room illumination from the rear. Seventy percent of the patients answered that the visual perceptions of moving lens fragments were similar to the video clips and 55% reported similarity with the IOL insertion. Eighty percent of the patients recommended that patients watch the video clip before their scheduled cataract surgery. Conclusions The patients' visual perceptions during cataract surgery can be reproduced in the model eye. Watching the video images preoperatively may help relax the patients during surgery. PMID:24788007
Early Word Comprehension in Infants: Replication and Extension
ERIC Educational Resources Information Center
Bergelson, Elika; Swingley, Daniel
2015-01-01
A handful of recent experimental reports have shown that infants of 6-9 months know the meanings of some common words. Here, we replicate and extend these findings. With a new set of items, we show that when young infants (age 6-16 months, n = 49) are presented with side-by-side video clips depicting various common early words, and one clip is…
Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips
Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali
2016-01-01
Summary: Video must now be considered a valuable tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable. PMID:27757342
Public online information about tinnitus: A cross-sectional study of YouTube videos.
Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai
2018-01-01
To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
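The odds ratios and confidence intervals reported here can be computed from a 2×2 table of counts; a Python sketch using a Wald interval on the log odds ratio, with hypothetical counts (the abstract does not report the underlying cell counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10 of 15 professional videos mention a term,
# versus 4 of 24 media videos.
or_, lo, hi = odds_ratio_ci(10, 5, 4, 20)  # OR = 10.0
```

An interval that excludes 1 (as with the paper's OR of 9.58, CI 1.94-47.42) indicates a statistically significant difference in the odds between the two video sources.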
Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B
2018-01-01
The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. To discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatter™ Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatter™ Servo System (Synaptive Medical) is a surgical exoscope which avoids many of the ergonomic constraints of the binocular operating microscope, but is associated with a steep learning curve. The BrightMatter™ Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatter™ Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium. Copyright © 2017 by the Congress of Neurological Surgeons
Learning Computational Models of Video Memorability from fMRI Brain Imaging.
Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming
2015-08-01
Generally, various visual media are unequally memorable to the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
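The training objective described is to maximize the correlation between the low-level audiovisual features and the fMRI-derived features (joint subspace learning, akin to canonical correlation analysis). The quantity being maximized per projected dimension is a Pearson correlation; a minimal stdlib sketch with invented data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented 1-D projections of audiovisual features vs. fMRI-derived features.
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # ≈ 1.0: perfectly correlated
```

In the full framework, the projection directions for both feature spaces would be optimized jointly so that this correlation is as high as possible on the training set.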
Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story
Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.
2011-01-01
Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491
A P300 event related potential technique for assessment of sexually oriented interest.
Vardi, Yoram; Volos, Michal; Sprecher, Elliot; Granovsky, Yelena; Gruenwald, Ilan; Yarnitsky, David
2006-12-01
Despite all of the modern, sophisticated tests that exist for diagnosing and assessing male and female sexual disorders, to our knowledge there is no objective psychophysiological test to evaluate sexual arousal and interest. We provide preliminary data showing a decrease in auditory P300 wave amplitude during exposure to sexually explicit video clips and a significant correlation between the auditory P300 amplitude decrease and self-reported scores of sexual arousal and interest in the clips. A total of 30 healthy subjects were exposed to several blocks of auditory stimuli administered using an oddball paradigm. Baseline auditory P300 amplitudes were obtained and auditory stimuli were then delivered while viewing visual clips with 3 types of content, including sport, scenery and sex. Auditory P300 amplitude significantly decreased during viewing clips of all contents. Viewing sexual content clips caused a maximal decrease in P300 amplitude (p <0.0001). In addition, a high correlation was found between the amplitude decrease and scores on the sexual arousal questionnaire regarding the viewed clips (r = 0.61, p <0.001). In addition, the P300 amplitude decrease was significantly related to the sexual interest score (r = 0.37, p = 0.042) but not to interest in clips of nonsexual content. The change in auditory P300 amplitude during exposure to visual stimuli with sexual context seems to be an objective measure of subject sexual interest. This method might be applied to assess therapeutic intervention and as a diagnostic tool for assessing disorders of impaired libido or psychogenic sexual dysfunction.
Naumann, David N; Mellis, Clare; Smith, Iain M; Mamuza, Jasna; Skene, Imogen; Harris, Tim; Midwinter, Mark J; Hutchings, Sam D
2016-01-01
Objectives Sublingual microcirculatory monitoring for traumatic haemorrhagic shock (THS) may predict clinical outcomes better than traditional blood pressure and cardiac output, but is not usually performed until the patient reaches the intensive care unit (ICU), missing earlier data of potential importance. This pilot study assessed for the first time the feasibility and safety of sublingual video-microscopy for THS in the emergency department (ED), and whether it yields useable data for analysis. Setting A safety and feasibility assessment was undertaken as part of the prospective observational MICROSHOCK study; sublingual video-microscopy was performed at the UK-led Role 3 medical facility at Camp Bastion, Afghanistan, and in the ED in 3 UK Major Trauma Centres. Participants There were 15 casualties (2 military, 13 civilian) who presented with traumatic haemorrhagic shock with a median injury severity score of 26. The median age was 41; the majority (n=12) were male. The most common injury mechanism was road traffic accident. Primary and secondary outcome measures Safety and feasibility were the primary outcomes, as measured by lack of adverse events or clinical interruptions, and successful acquisition and storage of data. The secondary outcome was the quality of acquired video clips according to validated criteria, in order to determine whether useful data could be obtained in this emergency context. Results Video-microscopy was successfully performed and stored for analysis for all patients, yielding 161 video clips. There were no adverse events or episodes where clinical management was affected or interrupted. There were 104 (64.6%) video clips from 14 patients of sufficient quality for analysis. Conclusions Early sublingual microcirculatory monitoring in the ED for patients with THS is safe and feasible, even in a deployed military setting, and yields videos of satisfactory quality in a high proportion of cases. 
Further investigations of early microcirculatory behaviour in this context are warranted. Trial registration number NCT02111109. PMID:28003301
Multimodal Sparse Coding for Event Detection
2015-10-13
classification tasks based on a single modality. We present multimodal sparse coding for learning feature representations shared across multiple modalities...The shared representations are applied to multimedia event detection (MED) and evaluated in comparison to unimodal counterparts, as well as other...and video tracks from the same multimedia clip, we can force the two modalities to share a similar sparse representation whose benefit includes robust
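The core idea, a single sparse code that must explain concatenated audio and video features, can be sketched in toy form. The dictionary atoms, feature values and the 1-sparse matching-pursuit simplification below are all illustrative, not the paper's actual method:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def one_sparse_code(x, dictionary):
    """Pick the single best-matching dictionary atom (1-sparse matching pursuit)."""
    scores = [abs(dot(x, d)) / math.sqrt(dot(d, d)) for d in dictionary]
    k = max(range(len(dictionary)), key=lambda i: scores[i])
    coeff = dot(x, dictionary[k]) / dot(dictionary[k], dictionary[k])
    return k, coeff

# Hypothetical 2-D audio and 2-D video features for one clip: concatenating
# them forces both modalities to be explained by the same code.
audio_feat = [1.0, 0.0]
video_feat = [0.9, 0.1]
joint = audio_feat + video_feat

dictionary = [
    [1.0, 0.0, 1.0, 0.0],   # atom 0: a correlated audio/video pattern
    [0.0, 1.0, 0.0, 1.0],   # atom 1: a different joint pattern
]
atom, coeff = one_sparse_code(joint, dictionary)
print(atom)  # 0: both modalities share the code of atom 0
```

Because the two modalities enter the same coding problem, a clip whose audio and video agree activates the same atom for both, which is the shared-representation effect the abstract describes.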
ERIC Educational Resources Information Center
Wildeboer, Andrea; Thijssen, Sandra; Bakermans-Kranenburg, Marian J.; Jaddoe, Vincent W. V.; White, Tonya; Tiemeier, Henning; Van IJzendoorn, Marinus H.
2017-01-01
This study examined dispositional and situational correlates of donating behavior in a sample of 221 eight-year-old children. Children were shown a promotional clip for a charity, including a donation call. For a random half of the children, the video fragment ended with a probe of a same-sex peer donating money to the charity. Seeing a peer…
NASA Astrophysics Data System (ADS)
2001-01-01
Last year saw very good progress at ESO's Paranal Observatory, the site of the Very Large Telescope (VLT). The third and fourth 8.2-m Unit Telescopes, MELIPAL and YEPUN, had "First Light" (cf. PR 01/00 and PR 18/00), while the first two, ANTU and KUEYEN, were busy collecting first-class data for hundreds of astronomers. Meanwhile, work continued towards the next phase of the VLT project, the combination of the telescopes into the VLT Interferometer. The test instrument, VINCI (cf. PR 22/00), is now being installed in the VLTI Laboratory at the centre of the observing platform on the top of Paranal. Below is a new collection of video sequences and photos that illustrate the latest developments at the Paranal Observatory. They were obtained by the EPR Video Team in December 2000. The photos are available in different formats, including "high-resolution", which is suitable for reproduction purposes. A related ESO Video News Reel for professional broadcasters will soon become available and will be announced via the usual channels. Overview Paranal Observatory (Dec. 2000) Video Clip 02a/01 [MPEG - 4.5Mb] ESO PR Video Clip 02a/01 "Paranal Observatory (December 2000)" (4875 frames/3:15 min) [MPEG Video+Audio; 160x120 pix; 4.5Mb] [MPEG Video+Audio; 320x240 pix; 13.5 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02a/01 shows some of the construction activities at the Paranal Observatory in December 2000, beginning with a general view of the site. Then follow views of the Residencia, a building designed by the architects Auer and Weber in Munich - it integrates very well into the desert, creating a welcome recreational site for staff and visitors in this harsh environment. The next scenes focus on the "stations" for the auxiliary telescopes for the VLTI and the installation of two delay lines in the 140-m long underground tunnel. 
The following part of the video clip shows the start-up of the excavation work for the 2.6-m VLT Survey Telescope (VST) as well as the location known as the "NTT Peak", now under consideration for the installation of the 4-m VISTA telescope. The last images are from the second 8.2-m Unit Telescope, KUEYEN, which has been in full use by astronomers with the UVES and FORS2 instruments since April 2000. ESO PR Photo 04a/01 [Preview - JPEG: 466 x 400 pix - 58k] [Normal - JPEG: 931 x 800 pix - 688k] [Hires - JPEG: 3000 x 2577 pix - 7.6M] Caption: PR Photo 04a/01 shows an afternoon view from the Paranal summit towards the east, with the Base Camp and the new Residencia on the slope to the right, above the valley in the shadow of the mountain. ESO PR Photo 04b/01 [Preview - JPEG: 791 x 400 pix - 89k] [Normal - JPEG: 1582 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1517 pix - 3.6M] PR Photo 04b/01 shows the ramp leading to the main entrance of the partly subterranean Residencia, with the steel skeleton for the dome over the central area in place. ESO PR Photo 04c/01 [Preview - JPEG: 498 x 400 pix - 65k] [Normal - JPEG: 995 x 800 pix - 640k] [Hires - JPEG: 3000 x 2411 pix - 6.6M] PR Photo 04c/01 is an indoor view of the reception hall under the dome, looking towards the main entrance. ESO PR Photo 04d/01 [Preview - JPEG: 472 x 400 pix - 61k] [Normal - JPEG: 944 x 800 pix - 632k] [Hires - JPEG: 3000 x 2543 pix - 5.8M] PR Photo 04d/01 shows the ramps from the reception area towards the rooms. The VLT Interferometer The Delay Lines constitute a most important element of the VLT Interferometer, cf. PR Photos 26a-e/00. At this moment, two Delay Lines are operational on site. A third system will be integrated early this year. The VLTI Delay Line is located in an underground tunnel that is 168 metres long and 8 metres wide. 
This configuration has been designed to accommodate up to eight Delay Lines, including their transfer optics, in an ideal environment: stable temperature, a high degree of cleanliness, low levels of straylight and low air turbulence. The positions of the Delay Line carriages are computed to adjust the Optical Path Lengths required for the fringe pattern observation. The positions are controlled in real time by a laser metrology system, specially developed for this purpose. The positioning precision is about 20 nm (1 nm = 10^-9 m, or 1 millionth of a millimetre) over a distance of 120 metres. The maximum velocity is 0.50 m/s in positioning mode and 0.05 m/s in operation. The system is designed for 25 years of operation and to survive earthquakes up to magnitude 8.6 on the Richter scale. The VLTI Delay Line is a three-year project, carried out by ESO in collaboration with Dutch Space Holdings (formerly Fokker Space) and TPD-TNO. VLTI Delay Lines (December 2000) - ESO PR Video Clip 02b/01 [MPEG - 3.6Mb] ESO PR Video Clip 02b/01 "VLTI Delay Lines (December 2000)" (2000 frames/1:20 min) [MPEG Video+Audio; 160x120 pix; 3.6Mb] [MPEG Video+Audio; 320x240 pix; 13.7 Mb] [RealMedia; streaming; 34kps] [RealMedia; streaming; 200kps] ESO Video Clip 02b/01 shows the Delay Lines of the VLT Interferometer facility at Paranal during tests. One of the carriages is moving on 66-metre long rectified rails, driven by a linear motor. The carriage is equipped with three wheels in order to preserve high guidance accuracy. Another important element is the Cat's Eye that reflects the light from the telescope to the VLT instrumentation. This optical system is made of aluminium (including the mirrors) to avoid thermo-mechanical problems. ESO PR Photo 04e/01 [Preview - JPEG: 400 x 402 pix - 62k] [Normal - JPEG: 800 x 804 pix - 544k] [Hires - JPEG: 3000 x 3016 pix - 6.2M] Caption: PR Photo 04e/01 shows one of the 30 "stations" for the movable 1.8-m Auxiliary Telescopes. 
When one of these telescopes is positioned ("parked") on top of it, the light will be guided through the hole towards the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04f/01 [Preview - JPEG: 568 x 400 pix - 96k] [Normal - JPEG: 1136 x 800 pix - 840k] [Hires - JPEG: 3000 x 2112 pix - 4.6M] PR Photo 04f/01 shows a general view of the Interferometric Tunnel and the Delay Lines. ESO PR Photo 04g/01 [Preview - JPEG: 406 x 400 pix - 62k] [Normal - JPEG: 812 x 800 pix - 448k] [Hires - JPEG: 3000 x 2956 pix - 5.5M] PR Photo 04g/01 shows one of the Delay Line carriages in parking position. The "NTT Peak" The "NTT Peak" is a mountain top located about 2 km to the north of Paranal. It received this name when ESO considered moving the 3.58-m New Technology Telescope from La Silla to this peak. The possibility of installing the 4-m VISTA telescope (cf. PR 03/00) on this peak is now being discussed. ESO PR Photo 04h/01 [Preview - JPEG: 630 x 400 pix - 89k] [Normal - JPEG: 1259 x 800 pix - 1.1M] [Hires - JPEG: 3000 x 1907 pix - 5.2M] PR Photo 04h/01 shows the view from the "NTT Peak" towards the south, with the Paranal mountain and the VLT enclosures in the background. ESO PR Photo 04i/01 [Preview - JPEG: 516 x 400 pix - 50k] [Normal - JPEG: 1031 x 800 pix - 664k] [Hires - JPEG: 3000 x 2328 pix - 6.0M] PR Photo 04i/01 is a view towards the "NTT Peak" from the top of the Paranal mountain. The access road and the concrete pillar that was used to support a site testing telescope at the top of this peak can be seen. This is the caption to ESO PR Photos 04a-i/01 and PR Video Clips 02a-b/01. They may be reproduced if credit is given to the European Southern Observatory. The ESO PR Video Clips service provides visitors to the ESO website with "animated" illustrations of the ongoing work and events at the European Southern Observatory. 
The most recent clip was ESO PR Video Clip 01/01 about the Physics On Stage Festival (11 January 2001). Information about other ESO videos is also available on the web.
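The role of the Delay Line carriages described above can be illustrated with a small geometry sketch: the optical path difference between two telescopes is the projection of the baseline onto the source direction, and the cat's eye folds the beam so the carriage travels half that distance. The factor of two is textbook interferometry, but the numbers are invented and this is not ESO's control software:

```python
import math

def required_carriage_position(baseline_m, source_unit_vec):
    """Carriage travel compensating the geometric delay; the cat's eye
    folds the beam, so the carriage moves half the optical path difference."""
    opd = sum(b * s for b, s in zip(baseline_m, source_unit_vec))
    return opd / 2.0

# Hypothetical 100 m east-west baseline, source 30 degrees from zenith
# towards the east (axes: east, north, up, in metres).
theta = math.radians(30)
baseline = (100.0, 0.0, 0.0)
source = (math.sin(theta), 0.0, math.cos(theta))
pos = required_carriage_position(baseline, source)
print(round(pos, 2))  # 25.0 metres of carriage travel
```

As the Earth rotates, the source direction changes continuously, which is why the carriage position must be tracked in real time by the laser metrology system the release mentions.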
Lineup identification by children: effects of clothing bias.
Freire, Alejo; Lee, Kang; Williamson, Karen S; Stuart, Sarah J E; Lindsay, R C L
2004-06-01
This study examined effects of clothing cues on children's identification accuracy from lineups. Four- to 14-year-olds (n = 228) saw 12 video clips of individuals, each wearing a distinctly colored shirt. After watching each clip children were presented with a target-present or target-absent photo lineup. Three clothing conditions were included. In 2 conditions all lineup members wore the same colored shirt; in the third, biased condition, the shirt color of only one individual matched that seen in the preceding clip (the target in target-present trials and the replacement in target-absent trials). Correct identifications of the target in target-present trials were most frequent in the biased condition, whereas in target-absent trials the biased condition led to more false identifications of the target replacement. Older children were more accurate than younger children, both in choosing the target from target-present lineups and rejecting target-absent lineups. These findings suggest that a simple clothing cue such as shirt color can have a significant impact on children's lineup identification accuracy.
Lineup Identification by Children: Effects of Clothing Bias
Freire, Alejo; Lee, Kang; Williamson, Karen S.; Stuart, Sarah J. E.; Lindsay, R. C. L.
2008-01-01
This study examined effects of clothing cues on children's identification accuracy from lineups. Four- to 14-year-olds (n = 228) saw 12 video clips of individuals, each wearing a distinctly colored shirt. After watching each clip children were presented with a target-present or target-absent photo lineup. Three clothing conditions were included. In 2 conditions all lineup members wore the same colored shirt; in the third, biased condition, the shirt color of only one individual matched that seen in the preceding clip (the target in target-present trials and the replacement in target-absent trials). Correct identifications of the target in target-present trials were most frequent in the biased condition, whereas in target-absent trials the biased condition led to more false identifications of the target replacement. Older children were more accurate than younger children, both in choosing the target from target-present lineups and rejecting target-absent lineups. These findings suggest that a simple clothing cue such as shirt color can have a significant impact on children's lineup identification accuracy. PMID:15264450
SOFTWARE REVIEW: Multimedia Motion II. CD-ROM and Teacher's Guide
NASA Astrophysics Data System (ADS)
Scaife, Jon A.
2000-05-01
This CD is the second edition of Multimedia Motion. It is an excellent resource for learning about kinematics. To run the software requires at least a good 486 processor with Windows 9x or 3.1. I used a P166 with Win 98. The CD contains 36 video clips, or `movies', of events ranging from a space shuttle launch, sporting activities and vehicle crash tests to familiar laboratory air-track experiments. For each event there are options to access concise on-screen notes and an audio commentary. The emphasis is strongly on user activity. The software design is very good, stimulating and supporting the user to think and make decisions about physics. `Inauthentic labour' is largely dealt with by the programme, and the result is that there is a high thinking-time to using-time ratio. The application is easy to install from scratch (it took under three minutes from first opening the box to running it for real). Basic use is as follows: you play a video clip through to get an idea of its contents. This takes a few seconds. Then you play it again, step by step, using the mouse to track a particular point of interest as it moves in the video (e.g. the tip of the space shuttle or the centre of a tennis ball). This results in a data set of x, y and t coordinates. The data can be plotted immediately, not only as x or y versus time but also with velocity or acceleration as the ordinate. There is an option for curve fitting, and if a linear or quadratic fit is chosen, the equation is displayed, from which it is simple to obtain gradients and intercepts, terms that often have significance in the physical system. It might be inferred from above that the package is designed for A-level and upper GCSE. While this is so, it could also be used with younger secondary students. The software is transparent and data gathering and graph plotting are very easy. 
The difference in the use of the application with younger students will be in the level of interpretation that the teacher would expect. This is an issue that is in the teacher's hands; the package is flexible enough to accommodate a wide range of learning aims. Audio clips are brief and to the point, with both female and male speakers. The commentary goes beyond the descriptive, raising interesting questions and issues and setting challenges for the user. The optional screen text is pitched at A-level. Video clips can be viewed at full screen, which is a big improvement over the first edition. There are some new clips and a few omissions from the first edition; further CDs of movie data are planned. Clips may be played forwards or backwards at various speeds using a scroll bar. This allows close control and analysis, which is useful for critical events in which changes are rapid, such as collisions. Printing of data and graphs is straightforward and the results are clear. Unlike the first edition, Multimedia Motion II only gives direct support to graphs with time as the abscissa. For graphs such as v(x) versus v(y), however, it is a simple matter to save the data directly into Excel or another spreadsheet. I encountered only one snag in using the CD: if I forgot to save my data before moving to a new clip I could not recover it. This failing could be mine, but I did try hard. The Teacher's Guide contains photocopiable worksheets and detailed discussion and graphs of the video clips. For each clip there is a section of `useful data and formulae'. The guide is not a necessity for using the CD-ROM but it would be a very handy A-level teaching resource, allowing the option of independent use by students. It is accompanied by a floppy disc containing sets of experimental data for the video clips, a useful option for quick demonstrations. 
In addition, the guide explains the interesting and creative option of making your own movie sequences from videotape, though access to a video card would be needed to do this. Imagine the motivational value of students recording themselves playing various sports, or even just riding a bike, and then analysing their own kinematic behaviour on the computer! In summary, this package has a lot to offer in the teaching of kinematics. It can be used in its full glory at A-level but has much to offer at, or even before, GCSE. It allows for the nearest realistic thing to direct experimentation in a wide range of well-chosen examples of motion.
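The measurement workflow the review describes, clicking a point of interest frame by frame to obtain (t, x) data and then deriving velocity, can be sketched as follows. The sample data are invented (motion with x = t^2, i.e. constant acceleration):

```python
def velocities(ts, xs):
    """Forward differences between successive tracked video frames."""
    return [(xs[i + 1] - xs[i]) / (ts[i + 1] - ts[i]) for i in range(len(ts) - 1)]

# Hypothetical tracked coordinates: frames 0.1 s apart, x = t^2.
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
xs = [0.0, 0.01, 0.04, 0.09, 0.16]
vs = velocities(ts, xs)
print([round(v, 2) for v in vs])  # [0.1, 0.3, 0.5, 0.7]
```

The linearly increasing velocities are what the package's v-against-t plot would show, and a quadratic fit to x(t) would recover the acceleration as twice the leading coefficient.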
Sophie in the Snow: A Simple Approach to Datalogging and Modelling in Physics
ERIC Educational Resources Information Center
Oldknow, Adrian; Huyton, Pip; Galloway, Ian
2010-01-01
Most students now have access to devices such as digital cameras and mobile phones that are capable of taking short video clips outdoors. Such clips can be used with powerful ICT tools, such as Tracker, Excel and TI-Nspire, to extract time and coordinate data about a moving object, to produce scattergrams and to fit models. In this article we…
Consolidation of Complex Events via Reinstatement in Posterior Cingulate Cortex.
Bird, Chris M; Keidel, James L; Ing, Leslie P; Horner, Aidan J; Burgess, Neil
2015-10-28
It is well-established that active rehearsal increases the efficacy of memory consolidation. It is also known that complex events are interpreted with reference to prior knowledge. However, comparatively little attention has been given to the neural underpinnings of these effects. In healthy adult humans, we investigated the impact of effortful, active rehearsal on memory for events by showing people several short video clips and then asking them to recall these clips, either aloud (Experiment 1) or silently while in an MRI scanner (Experiment 2). In both experiments, actively rehearsed clips were remembered in far greater detail than unrehearsed clips when tested a week later. In Experiment 1, highly similar descriptions of events were produced across retrieval trials, suggesting that a degree of semanticization of the memories had taken place. In Experiment 2, spatial patterns of BOLD signal in medial temporal and posterior midline regions were correlated when encoding and rehearsing the same video. Moreover, the strength of this correlation in the posterior cingulate predicted the amount of information subsequently recalled. This is likely to reflect a strengthening of the representation of the video's content. We argue that these representations combine both new episodic information and stored semantic knowledge (or "schemas"). We therefore suggest that posterior midline structures aid consolidation by reinstating and strengthening the associations between episodic details and more generic schematic information. This leads to the creation of coherent memory representations of lifelike, complex events that are resistant to forgetting, but somewhat inflexible and semantic-like in nature. Copyright © 2015 Bird, Keidel et al.
Clerico, Andrea; Tiwari, Abhishek; Gupta, Rishabh; Jayaraman, Srinivasan; Falk, Tiago H.
2018-01-01
The quantity of music content is rapidly increasing, and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from measured brain waves using non-invasive tools, such as electroencephalography (EEG). Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR), has also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6, 20, 8, and 7% for arousal, valence, dominance and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features proved particularly useful for arousal (feature-level fusion) and liking (decision-level fusion) prediction. Together, these findings show the importance of the proposed features in characterizing human affective states during music clip watching. PMID:29367844
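A minimal sketch of one amplitude-amplitude coupling feature of the kind described above: assuming the band-limited amplitude envelopes have already been extracted (in practice via band-pass filtering and a Hilbert transform), the per-electrode feature reduces to the correlation between two band envelopes. The envelope values are hypothetical:

```python
import math

def pearson(x, y):
    """Correlation between two equal-length amplitude-envelope sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical theta-band and gamma-band envelopes for one electrode.
theta_env = [0.2, 0.5, 0.9, 0.4, 0.1]
gamma_env = [0.25, 0.55, 0.85, 0.35, 0.15]
coupling = pearson(theta_env, gamma_env)  # near 1: strongly coupled amplitudes
```

Computing this value per electrode (and per band pair) yields a feature vector that could then be fed to a support vector classifier, as in the study.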
Pattern recall skills of talented soccer players: Two new methods applied.
van Maarseveen, Mariëtte J J; Oudejans, Raôul R D; Savelsbergh, Geert J P
2015-06-01
In this study we analyzed the pattern recall skills of talented soccer players by means of two innovative methods of analysis and gaze behavior data. Twenty-two young female soccer players watched video clips of 3 vs. 3 small-sided games and, after occlusion, had to reproduce the positions of the players. Recall performance was measured by calculating the spatial error of the recalled player positions at the moment of occlusion and at consecutive 33ms increments. We analyzed player positions relative to each other, by assessing geometric pattern features in terms of angles between players, and we transformed the data into real-world coordinates to exclude the effects of the 2D perspective in the video clips. The results showed that the participants anticipated the movements of the patterns. In real-world coordinates, the more experienced players anticipated the pattern further in advance than the less experienced players and demonstrated a higher search rate, a shorter fixation duration and a higher fixation order. The differences in recall accuracy between the defensive and offensive elements were not consistent across the methods of analysis and, therefore, we propose that perspective effects of the video clip should be taken into account in further research. Copyright © 2015 Elsevier B.V. All rights reserved.
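The recall-error measure described above can be sketched as follows: recalled positions are compared with the true positions at occlusion and at later 33 ms frames, and the best-matching frame indicates how far ahead the pattern was anticipated. All coordinates and the three-player setup are hypothetical:

```python
import math

def mean_error(recalled, actual):
    """Mean Euclidean distance between recalled and true player positions."""
    return sum(math.dist(r, a) for r, a in zip(recalled, actual)) / len(recalled)

# True positions of 3 players at occlusion (t = 0 ms) and two later frames.
frames = {
    0:  [(0.0, 0.0), (5.0, 0.0), (2.5, 4.0)],
    33: [(0.5, 0.0), (5.5, 0.0), (3.0, 4.0)],
    66: [(1.0, 0.0), (6.0, 0.0), (3.5, 4.0)],
}
recalled = [(0.6, 0.0), (5.4, 0.0), (3.0, 4.1)]

errors = {t: mean_error(recalled, pos) for t, pos in frames.items()}
best_t = min(errors, key=errors.get)
print(best_t)  # 33: recall best matches the pattern ~33 ms after occlusion
```

In the study's real-world analysis the same comparison would be run on coordinates transformed out of the 2D video perspective, which is the correction the authors argue for.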
Experimental application of simulation tools for evaluating UAV video change detection
NASA Astrophysics Data System (ADS)
Saur, Günter; Bartelsen, Jan
2015-10-01
Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are changeable objects such as trees, and compression or transmission artifacts. To enable the use of automatic change detection within an interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and varying influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. 
For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
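A minimal change-mask step in the spirit of the processing chain above (assuming the two frames are already registered; the real chain performs image registration first) thresholds the absolute grey-level difference so that rendering noise is suppressed while genuinely added objects survive:

```python
def change_mask(frame_a, frame_b, threshold=30):
    """Mark pixels whose grey-level difference exceeds the noise threshold."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two hypothetical 3x4 grey-level frames: a bright object appears at centre.
before = [[10, 10, 10, 10],
          [10, 12, 11, 10],
          [10, 10, 10, 10]]
after  = [[10, 11,  9, 10],
          [10, 200, 199, 10],   # newly added bright object
          [10, 10, 10, 10]]
mask = change_mask(before, after)
print(sum(map(sum, mask)))  # 2 changed pixels
```

The threshold plays the role the paper assigns to artifact suppression: small differences from compression or rendering fall below it, while the added object is retained in the mask.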
Content-based TV sports video retrieval using multimodal analysis
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru
2003-09-01
In this paper, we propose content-based video retrieval, which is retrieval by semantic content. Because video data comprises multimodal information streams (visual, auditory and textual), we describe a strategy that uses multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.
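The keyword-retrieval path, in which text extracted from each clip (e.g. by speech recognition) is indexed for querying, can be sketched with a toy inverted index. Clip names and transcripts are invented:

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the set of clips whose transcript contains it."""
    index = defaultdict(set)
    for clip, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(clip)
    return index

# Hypothetical speech-recognition output for three football clips.
transcripts = {
    "clip_01": "free kick outside the penalty area",
    "clip_02": "corner kick headed over the bar",
    "clip_03": "goal scored from the penalty spot",
}
index = build_index(transcripts)
hits = sorted(index["penalty"])
print(hits)  # ['clip_01', 'clip_03']
```

In the paper's system, the same lookup would be restricted to a predefined domain vocabulary and combined with cues from the visual and audio streams.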
Eliciting positive, negative and mixed emotional states: A film library for affective scientists.
Samson, Andrea C; Kreibig, Sylvia D; Soderstrom, Blake; Wade, A Ayanna; Gross, James J
2016-08-01
We describe the creation of a film library designed for researchers interested in positive (amusing), negative (repulsive), mixed (amusing and repulsive) and neutral emotional states. Three hundred 20- to 33-second film clips videotaped by amateurs were selected from video-hosting websites and screened in laboratory studies by 75 female participants on self-reported amusement and repulsion (Experiments 1 and 2). On the basis of pre-defined cut-off values, 51 positive, 39 negative, 59 mixed and 50 neutral film clips were selected. These film clips were then presented to 411 male and female participants in a large online study to identify film clips that reliably induced the target emotions (Experiment 3). Depending on the goal of the study, researchers may choose positive, negative, mixed or neutral emotional film clips on the basis of Experiments 1 and 2 or Experiment 3 ratings.
Influence of audio triggered emotional attention on video perception
NASA Astrophysics Data System (ADS)
Torres, Freddy; Kalva, Hari
2014-02-01
Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.
Video Demonstration of Seltzer Tablet in Water Onboard the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
2002-01-01
Saturday Morning Science, the science-of-opportunity series of applied experiments and demonstrations performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. In this video clip, Pettit demonstrates dropping an Alka-Seltzer tablet into a free-floating ball of water, which becomes filled with fizzing activity. Watch the video to see the surprising results!
ERIC Educational Resources Information Center
Kersting, Nicole B.; Sutton, Taliesin; Kalinec-Craig, Crystal; Stoehr, Kathleen Jablon; Heshmati, Saeideh; Lozano, Guadalupe; Stigler, James W.
2016-01-01
In this article we report further explorations of the classroom video analysis instrument (CVA), a measure of usable teacher knowledge based on scoring teachers' written analyses of classroom video clips. Like other researchers, our work thus far has attempted to identify and measure separable components of teacher knowledge. In this study we take…
Surgical decision making in a teaching hospital: a linguistic analysis.
Bezemer, Jeff; Murtagh, Ged; Cope, Alexandra; Kneebone, Roger
2016-10-01
The aim of the study was to gain insight into the involvement of non-operating surgeons in intraoperative surgical decision making at a teaching hospital. The decision to proceed to clip and cut the cystic duct during laparoscopic cholecystectomy was investigated through direct observation of teamwork. Eleven laparoscopic cholecystectomies performed by consultant surgeons and specialty trainees at a London teaching hospital were audio and video recorded. Talk among the surgical team was transcribed and subjected to linguistic analysis, in conjunction with observational analysis of the video material, sequentially marking the unfolding operation. Two components of decision making were identified: participation and rationalization. Participation refers to the degree to which agreement was sought within the surgical team prior to clipping the cystic duct. Rationalization refers to the degree to which the evidential grounds for clipping and cutting were verbalized. In the majority of cases the decision to clip and cut the cystic duct was jointly made by members of the surgical team rather than a solitary surgeon, involving verbal explication of clinical reasoning and verbal agreement. The extent of joint decision making appears to have been mitigated by two factors: the trainee's level of training and the duration of the case. © 2014 Royal Australasian College of Surgeons.
Video Dubbing Projects in the Foreign Language Curriculum
ERIC Educational Resources Information Center
Burston, Jack
2005-01-01
The dubbing of muted video clips offers an excellent opportunity to develop the skills of foreign language learners at all linguistic levels. In addition to its motivational value, soundtrack dubbing provides a rich source of activities in all language skill areas: listening, reading, writing, speaking. With advanced students, it also lends itself…
Teaching Psychology to Student Nurses: The Use of "Talking Head" Videos
ERIC Educational Resources Information Center
Snelgrove, Sherrill; Tait, Desiree J. R.; Tait, Michael
2016-01-01
Psychology is a central part of undergraduate nursing curricula in the UK. However, student nurses report difficulties recognising the relevance and value of psychology. We sought to strengthen first-year student nurses' application of psychology by developing a set of digital stories based around "Talking Head" video clips where…
ERIC Educational Resources Information Center
Bauters, Merja; Purma, Jukka; Leinonen, Teemu
2014-01-01
The aim of this short paper is to look at how mobile video recording devices could support learning related to physical practices or places and situations at work. This paper discusses a particular kind of workplace learning, namely learning using short video clips that are related to the physical environment and tasks performed in situ. The paper…
ERIC Educational Resources Information Center
Llinares, Salvador; Valls, Julia
2009-01-01
This study explores how preservice primary teachers became engaged in meaning-making mathematics teaching when participating in online discussions within learning environments integrating video-clips of mathematics teaching. We identified different modes of participation in the online discussions and different levels of knowledge-building. The…
Game-based situation awareness training for child and adult cyclists
Airaksinen, Jasmiina; Kanerva, Kaisa; Rissanen, Anna; Ränninranta, Riikka; Åberg, Veera
2017-01-01
Safe cycling requires situation awareness (SA), which is the basis for recognizing and anticipating hazards. Children have poorer SA than adults, which may put them at risk. This study investigates whether cyclists' SA can be trained with a video-based learning game. The effect of executive working memory on SA was also studied. Thirty-six children (9–10 years) and 22 adults (21–48 years) played the game. The game had 30 video clips filmed from a cyclist's perspective. Each clip was suddenly masked, and two or three locations were presented. The player's task was to choose the locations with a potential hazard, and feedback was given on their answers. Working memory capacity (WMC) was tested with a counting span task. Children's and adults' performance improved while playing the game, which suggests that playing the game trains SA. Adults performed better than children, and they also glanced at hazards more while the video was playing. As expected, children had a lower WMC than adults, but WMC did not predict performance within the groups. This indicates that SA does not depend on WMC when passively viewing videos. PMID:28405369
Cross-modal signatures in maternal speech and singing
Trehub, Sandra E.; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda
2013-01-01
We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined. PMID:24198805
2018-05-17
This video clip shows a test of a new percussive drilling technique at NASA's Jet Propulsion Laboratory in Pasadena, California. On May 19, NASA's Curiosity rover is scheduled to test percussive drilling on Mars for the first time since December 2016. The video clip was shot on March 28, 2018. It has been sped up by 50 times. Curiosity's drill was designed to pulverize rock samples into powder, which can then be deposited into two chemistry laboratories carried inside the rover. Curiosity's science team is eager to see the rover use percussive drilling again; it will approach a clay-enriched area later this year that could shed new light on the history of water in Gale Crater. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA22324
She looks sad, but he looks mad: the effects of age, gender, and ambiguity on emotion perception.
Parmley, Maria; Cunningham, Joseph G
2014-01-01
This study investigated how target sex, target age, and expressive ambiguity influence emotion perception. Undergraduate participants (N = 192) watched morphed video clips of eight child and eight adult facial expressions shifting from neutral to either sadness or anger. Participants were asked to stop the video clip when they first saw an emotion appear (perceptual sensitivity) and were asked to identify the emotion that they saw (accuracy). Results indicate that female participants identified sad expressions sooner in female targets than in male targets. Participants were also more accurate identifying angry facial expressions by male children than by female children. Findings are discussed in terms of the effects of ambiguity, gender, and age on the perception of emotional expressions.
Reyes, Jorge R; Vollmer, Timothy R; Hall, Astrid
2017-01-01
We compared outcomes of arousal and preference assessments for five adult male alleged sexual offenders with intellectual disabilities. Arousal assessments involved the use of the penile plethysmograph to measure changes in penile circumference to both deviant (males and females under the age of 18) and nondeviant (males and females over the age of 18) video clips. Paired-stimulus preference assessments were arranged to present still images from the video clips used in the arousal assessments. Results showed correspondence between the assessments for four out of the five participants. Implications are discussed for the use of preference assessment methodology as a less intrusive assessment approach for sexual offender assessments. © 2016 Society for the Experimental Analysis of Behavior.
No-reference video quality measurement: added value of machine learning
NASA Astrophysics Data System (ADS)
Mocanu, Decebal Constantin; Pokhrel, Jeevan; Garella, Juan Pablo; Seppänen, Janne; Liotou, Eirini; Narwaria, Manish
2015-11-01
Video quality measurement is an important component in the end-to-end video delivery chain. Video quality is, however, subjective, and thus there will always be interobserver differences in the subjective opinion about the visual quality of the same video. Despite this, most existing works on objective quality measurement typically focus only on predicting a single score and evaluate prediction accuracy based on how close it is to the mean opinion score (or similar average-based ratings). Clearly, such an approach ignores the underlying diversity in the subjective scoring process and, as a result, does not allow further analysis of how reliable the objective prediction is in terms of subjective variability. Consequently, the aim of this paper is to analyze this issue and present a machine-learning-based solution to address it. We demonstrate the utility of our ideas by considering the practical scenario of video broadcast transmissions, with a focus on digital terrestrial television (DTT), and proposing a no-reference objective video quality estimator for such an application. We conducted meaningful verification studies on different video content (including video clips recorded from real DTT broadcast transmissions) in order to verify the performance of the proposed solution.
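The paper's actual estimator is not reproduced in this abstract. As a purely illustrative sketch of the core idea — predicting the spread of subjective opinion rather than only an average score — a toy nearest-neighbour predictor over hand-crafted video features might look like the following; the function name, features, and data are all hypothetical:

```python
import math

def predict_opinion_distribution(train_features, train_mean, train_std, query, k=3):
    """Toy no-reference estimate: average the mean opinion score and its
    standard deviation over the k training videos nearest in feature space."""
    nearest = sorted(
        range(len(train_features)),
        key=lambda i: math.dist(train_features[i], query),
    )[:k]
    return (
        sum(train_mean[i] for i in nearest) / k,
        sum(train_std[i] for i in nearest) / k,
    )
```

Returning a (mean, spread) pair rather than a single score is what allows downstream analysis of how reliable the objective prediction is relative to subjective variability.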
Video Analysis of a Plucked String: An Example of Problem-based Learning
NASA Astrophysics Data System (ADS)
Wentworth, Christopher D.; Buse, Eric
2009-11-01
Problem-based learning is a teaching methodology that grounds learning within the context of solving a real problem. Typically the problem initiates learning of concepts rather than simply being an application of the concept, and students take the lead in identifying what must be developed to solve the problem. Problem-based learning in upper-level physics courses can be challenging, because of the time and financial requirements necessary to generate real data. Here, we present a problem that motivates learning about partial differential equations and their solution in a mathematical methods for physics course. Students study a plucked elastic cord using high speed digital video. After creating video clips of the cord motion under different tensions they are asked to create a mathematical model. Ultimately, students develop and solve a model that includes damping effects that are clearly visible in the videos. The digital video files used in this project are available on the web at http://physics.doane.edu .
Austin, E W; Johnson, K K
1997-01-01
This article examines the immediate and delayed effects of media literacy training on third-grade children's perceptions of alcohol advertising, alcohol norms, expectancies for drinking, and behaviors toward alcohol. A Solomon four-group style experiment (N = 225) with two levels of the treatment factor assessed the effectiveness of in-school media literacy training for alcohol. The experiment compared a treatment that included the viewing of a videotape about television advertising along with the viewing of video clips of alcohol ads and discussion pertaining to alcohol advertising specifically versus one that included the viewing of the same general purpose media literacy videotape along with video clips of non-alcohol advertising and then discussion of advertising in general. The treatment had both immediate and delayed effects. Immediate effects included the children's increased understanding of persuasive intent, viewing of characters as less similar to people they knew in real life and less desirable, decreased desire to be like the characters, decreased expectation of positive consequences from drinking alcohol, and decreased likelihood to choose an alcohol-related product. Indirect effects also were found on their perceptions of television's realism and their views of social norms related to alcohol. Delayed effects were examined and confirmed on expectancies and behavior. The treatment was more effective when alcohol-specific, and it also was more effective among girls than boys.
Do You See What I See? How We Use Video as an Adjunct to General Surgery Resident Education.
Abdelsattar, Jad M; Pandian, T K; Finnesgard, Eric J; El Khatib, Moustafa M; Rowse, Phillip G; Buckarma, EeeL N H; Gas, Becca L; Heller, Stephanie F; Farley, David R
2015-01-01
Preparation of learners for surgical operations varies by institution, surgeon staff, and the trainees themselves. Often the operative environment is overwhelming for surgical trainees, and the educational experience is substandard due to inadequate preparation. We sought to develop a simple, quick, and interactive tool that might assess each individual trainee's knowledge baseline before participating in minimally invasive surgery (MIS). A 4-minute video with 5 separate muted clips from laparoscopic procedures (splenectomy, gastric band removal, cholecystectomy, adrenalectomy, and inguinal hernia repair) was created and shown to medical students (MS), general surgery residents, and staff surgeons. Participants were asked to watch the video and commentate (provide facts) on the operation, body region, instruments, anatomy, pathology, and surgical technique. Comments were scored using a 100-point grading scale (100 facts agreed upon by 8 surgical staff and trainees), with points deducted for incorrect answers. All participants were video recorded. Performance was scored by 2 separate raters. The setting was an academic medical center. Participants included MS (n = 10), interns (n = 8), postgraduate year 2 residents (PGY2s) (n = 11), PGY3s (n = 10), PGY4s (n = 9), PGY5s (n = 7), and general surgery staff surgeons (n = 5). Scores ranged from -5 to 76 total facts offered during the 4-minute video examination. MS scored the lowest (mean, range; 5, -5 to 8); interns were better (17, 4-29), followed by PGY2s (31, 21-34), PGY3s (33, 10-44), PGY4s (44, 19-47), PGY5s (48, 28-49), and staff (48, 17-76), p < 0.001. Rater concordance was 0.98, measured using a concordance correlation coefficient (95% CI: 0.96-0.99). Only 2 of 8 interns acknowledged the critical view during the laparoscopic cholecystectomy video clip vs 10 of 11 PGY2 residents (p < 0.003). Of 8 interns, 7 misperceived the spleen as the liver in the splenectomy clip vs 2 of 7 chief residents (p = 0.02).
Not surprisingly, more experienced surgeons were able to relay a larger number of laparoscopic facts during a 4-minute video clip of 5 MIS operations than inexperienced trainees. However, even tenured staff surgeons relayed very few facts on procedures they were unfamiliar with. The potential differentiating capabilities of such a quick and inexpensive effort have pushed us to generate better online learning tools (operative modules) and hands-on simulation resources for our learners. We aim to repeat this and other studies to see if our learners are better prepared for video assessment and, ultimately, MIS operations. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
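The study above reports rater agreement as a concordance correlation coefficient. As a minimal sketch (not the study's actual analysis code), Lin's concordance correlation coefficient between two raters' score lists can be computed as follows; the function name and example data are hypothetical:

```python
from statistics import fmean

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (n-denominator) variance and covariance.
    """
    mx, my = fmean(x), fmean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC penalizes systematic offsets between raters, which is why it is preferred for checking that two raters assign the same scores, not merely correlated ones.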
NASA's Kepler Reveals Potential New Worlds - Raw Video New File
2017-06-19
This is a video file, or a collection of unedited video clips for media usage, in support of the Kepler mission's latest discovery announcement. Launched in 2009, the Kepler space telescope is our first mission capable of identifying Earth-size planets around other stars. On Monday, June 19, 2017, scientists announced the results from the latest Kepler candidate catalog of the mission at a press conference at NASA's Ames Research Center.
Teaching child development to medical students.
Clark, Brenda; Andrews, Debra; Taghaddos, Soreh; Dinu, Irina
2012-12-01
Several published strategies on teaching the screening of normal child development were integrated into a small group learning experience for second-year medical students to address practical and logistical problems of approaches used individually. This study examines the effectiveness of this integrated approach using student evaluations. A total of 191 second-year university medical and dental students were invited to participate. Well-described learning objectives, the Ages and Stages Questionnaire (ASQ), live parent-child dyads and video backup were used. Students rotated through three small group stations. Feedback was provided using a Likert scale (from 1, low, to 5, high) and written comments. Consent was obtained. Live parent-child dyads versus video clip groups were analysed by averaging overall scores. Generalised estimating equation (GEE) analysis in Stata (Stata Corporation, College Station, Texas) was used for comparing the two groups. A total of 178 students (93%) agreed to participate and filled out the evaluation forms. The overall score on the Likert scale was 4.6 (range 4-5). On two occasions video clips were substituted for live parent-child dyad presentations in one of the three stations. These students (n=43, rating 4.61/5) rated their experience as comparable with those who had three live family stations (n=135, rating 4.56/5). Student comments were grouped into broad themes, with most being positive about their learning experience. This integrated approach is highly acceptable. Video clip usage, live dyads, clear written objectives and use of a standardised screening tool preserved the interaction and immediacy of a clinical encounter, while maintaining consistency in content. © Blackwell Publishing Ltd 2012.
Development of the Use of Conversational Cues to Assess Reality Status
ERIC Educational Resources Information Center
Woolley, Jacqueline D.; Ma, Lili; Lopez-Mobilia, Gabriel
2011-01-01
In this study, the authors assessed children's ability to use information overheard in other people's conversations to judge the reality status of a novel entity. Three- to 9-year-old children (N = 101) watched video clips in which two adults conversed casually about a novel being. Videos contained statements that explicitly denied, explicitly…
An Effective Profile Based Video Browsing System for e-Learning
ERIC Educational Resources Information Center
Premaratne, S. C.; Karunaratna, D. D.; Hewagamage, K. P.
2007-01-01
E-learning has acquired a prime place in many discussions recently. A number of research efforts around the world are trying to enhance education and training through improving e-learning facilities. This paper briefly explains one such attempt aimed at designing a system to support video clips in e-learning and explains how profiles of the…
Realizing Technology Potential through TPACK
ERIC Educational Resources Information Center
Learning & Leading with Technology, 2008
2008-01-01
A participatory culture driven by user-generated content has emerged in the world outside schools. Each day, more than 100,000 videos are uploaded to YouTube alone. According to the Digital Ethnography group at Kansas State University, 80% of the two-minute video clips are created by the users who post them--teenage authors working outside school.…
"Cool" Engagements with YouTube: Part 1
ERIC Educational Resources Information Center
Trier, James
2007-01-01
This article discusses the participatory potential of YouTube, a social website that allows users to upload, view, and share video clips. The author provides examples of how YouTube was incorporated into a course as part of a "mosh-pit" pedagogy that involved both students and teachers in engaging with a variety of YouTube videos.
The Impact of Using Youtube in EFL Classroom on Enhancing EFL Students' Content Learning
ERIC Educational Resources Information Center
Alwehaibi, Huda Omar
2015-01-01
Information technology has opened up prospects for rich and innovative approaches to tackle educational issues and provide solutions to the increasing demands for learning resources. YouTube, a video-sharing website that allows users to upload, view, and share video clips, offers access to new and dynamic opportunities for effective and…
Teaching "How Science Works" by Making and Sharing Videos
ERIC Educational Resources Information Center
Ingram, Neil
2010-01-01
"Science.tv" is a website where teachers and pupils can find quality video clips on a variety of scientific topics. It enables pupils to share research ideas and adds a dynamic new dimension to practical work. It has the potential to become an innovative way of incorporating "How science works" into secondary science curricula by encouraging…
Stanley, Jennifer Tehan; Lohani, Monika; Isaacowitz, Derek M.
2014-01-01
Identifying social gaffes is important for maintaining relationships. Older adults are less able than young adults to discriminate between socially appropriate and inappropriate behavior in video clips. One open question is how these social appropriateness ratings relate to potential age differences in the perception of what is actually funny or not. In the present study, young, middle-aged, and older adults were equally able to discriminate between appropriate and inappropriate social behavior in a diverse set of clips relevant to all age groups. However, young and middle-aged adults rated the gaffe clips as funnier than control clips, and young adults smiled more during the inappropriate clips than the control clips. Older adults did not show this pattern, suggesting that they did not find the inappropriate clips funny. Additionally, young adults endorsed a more aggressive humor style than middle-aged and older adults, and aggressive humor style endorsement mediated age differences in social appropriateness ratings. Results are discussed in terms of possible mechanisms such as cohort differences in humor and developmental prioritization of certain humor styles, as well as the importance of investigating age differences in both abilities and preferences. PMID:25244473
Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning
2018-01-01
Slow-motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional response to media clips containing decelerated human motion, with or without music, using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet, and sports footage, as well as the same scenes converted to real time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates, and smaller pupillary diameters. The presence of music, compared to visual-only presentations, strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation, and larger pupillary diameters, indicating higher arousal. Video genre additionally affected responses. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.
Mori, Hirohito; Kobara, Hideki; Nishiyama, Noriko; Fujihara, Shintaro; Kobayashi, Nobuya; Ayaki, Maki; Masaki, Tsutomu
2016-11-01
Although endoscopic mucosal resection is an established colorectal polyp treatment, local recurrence occurs in 13 % of cases due to inadequate snaring. We evaluated whether pre-clipping to the muscularis propria resulted in resected specimens with negative surgical margins without thermal denaturation. Of 245 polyps from 114 patients with colorectal polyps under 20 mm, we included 188 polyps from 81 patients. We randomly allocated polyps to the conventional injection group (CG) (97 polyps) or the pre-clipping injection group (PG) (91 polyps). The PG received three-point pre-clipping to ensure ample gripping to the muscle layer on the oral and both sides of the tumor with 4 mL local injection. Endoscopic ultrasonography was performed to measure the resulting bulge. Outcomes included the number of instances of thermal denaturation of the horizontal/vertical margin (HMX/VMX) or positive horizontal/vertical margins (HM+/VM+), the shortest distance from tumor margins to resected edges, and the maximum bulge distances from tumor surface to the muscularis propria. The numbers of HMX and HM+ in the CG and PG were 27 and 6, and 9 and 2 (P = 0.001), and VMX and VM+ were 8 and 5, and 0 and 0 (P = 0.057). The shortest distance from tumor margin to resected edge [median (range), mm] in polyps in the CG and PG was 0.6 (0-2.7) and 4.7 (2.1-8.9) (P = 0.018). The maximum bulge distances were 4.6 (3.0-8.0) and 11.0 (6.8-17.0) (P = 0.005). Pre-clipping enabled surgical margin-negative resection without thermal denaturation.
Can "YouTube" help students in learning surface anatomy?
Azer, Samy A
2012-07-01
In a problem-based learning curriculum, most medical students research the Internet for information for their "learning issues." Internet sites such as "YouTube" have become a useful resource for information. This study aimed at assessing YouTube videos covering surface anatomy. A search of YouTube was conducted from November 8 to 30, 2010, using the search terms "surface anatomy," "anatomy body painting," "living anatomy," "bone landmarks," and "dermatomes" for surface anatomy-related videos. Only relevant video clips in the English language were identified and the related URLs recorded. For each video, the following information was collected: title, authors, duration, number of viewers, posted comments, and total number of days on YouTube. The data were statistically analyzed, and videos were grouped into educationally useful and non-useful videos on the basis of major and minor criteria covering technical, content, authority, and pedagogy parameters. A total of 235 YouTube videos were screened and 57 were found to have information relevant to surface anatomy. Analysis revealed that 15 (27%) of the videos provided useful information on surface anatomy. These videos scored a mean ± SD of 14.0 ± 0.7 and mainly covered surface anatomy of the shoulder, knee, muscles of the back, leg, and ankle, carotid artery, dermatomes, and anatomical positions. The other 42 (73%) videos were not educationally useful, scoring a mean ± SD of 7.4 ± 1.8. The total number of viewers of all videos was 1,058,634. Useful videos were viewed by 497,925 (47% of total viewers). The total viewership per day was 750 for useful videos and 652 for non-useful videos. No video clips covering the surface anatomy of the head and neck, blood vessels and nerves of the upper and lower limbs, or chest and abdominal organs/structures were found. Currently, YouTube is an inadequate source of information for learning surface anatomy.
More work is needed from medical schools and educators to add useful videos on YouTube covering this area.
Mindfulness Dampens Cardiac Responses to Motion Scenes of Violence.
Brzozowski, Artur; Gillespie, Steven M; Dixon, Louise; Mitchell, Ian J
2018-01-01
Mindfulness is linked with improved regulatory processes of attention and emotion. The potential benefits of mindfulness are vast, including more positive emotional states and diminished arousal in response to emotional stimuli. This study aims to expand the current knowledge of the mechanisms of mindfulness by relating the latter to cardiovascular processes. The paper describes two studies which investigated the relationship of trait mindfulness to self-report measures of emotions elicited during a violent video clip and cardiovascular responses to the clip. Both studies recruited male and female participants, mainly university undergraduate students. The clip was 5 min long and evoked mainly feelings of tension and disgust. In study 1, we found that higher scores for trait mindfulness were associated with increased scores for valence (r = .370, p = .009), indicating a more positive interpretation of the clip. In study 2, the average heart rate during the clip was lower than during the preceding (p < .05) and following (p < .01) non-exposure conditions. Higher trait mindfulness was related to diminished heart rate reactivity (r = -.364, p = .044) and recovery (r = -.415, p = .020). This latter effect was obtained only when trait anxiety was used as a statistical covariate. Additionally, increased trait mindfulness was accompanied by higher resting heart rate (r = .390, p = .027). These outcomes suggest that mindfulness is linked with reductions in negative feelings evoked by violent motion stimuli.
Dr. Peter Cavanaugh Explains the Need and Operation of the FOOT Experiment
NASA Technical Reports Server (NTRS)
2003-01-01
This video clip is an interview with Dr. Peter Cavanaugh, principal investigator for the FOOT experiment. He explains the reasoning behind the experiment and shows some video clips of the FOOT experiment being calibrated and conducted in orbit. The heart of the FOOT experiment is an instrumented suit called the Lower Extremity Monitoring Suit (LEMS). This customized garment is a pair of Lycra cycling tights incorporating 20 carefully placed sensors and the associated wiring, control units, and amplifiers. LEMS enables the electrical activity of the muscles, the angular motions of the hip, knee, and ankle joints, and the force under both feet to be measured continuously. Measurements are also made on the arm muscles. Information from the sensors can be recorded for up to 14 hours on a small, wearable computer.
Myers, Dennis R; Sykes, Catherine; Myers, Scott
2008-01-01
This article offers practical guidance for educators as they prepare specialists to enhance the lives and communities of older persons through the strategic use of visual media in age-related courses. Advantages and disadvantages of this learning innovation are provided as well as seven approaches for enriching instruction. Resources are included for locating effective visual media, matching course content with video resources, determining fair use of copyrighted media, and inserting video clips into PowerPoint presentations. Strategies for accessing assistive services for implementing visual media in the classroom are also addressed. This article promotes the use of visual media for the purpose of enriching gerontological and geriatrics instruction for the adult learner.
The Educational Potential of YouTube.
Godwin, Haley T; Khan, Murtaza; Yellowlees, Peter
2017-12-01
The objective of this paper was to examine the educational potential and effectiveness of a 3-min video clip of a simulation of schizophrenia published online at YouTube. Researchers examined the 267 public comments published on the video-sharing website YouTube over 8 years by viewers of a schizophrenia simulation video titled "virtual hallucinations" made in the Second Life game platform. Comments were independently categorized into six groupings, then cooperatively finalized and qualitatively analyzed. The six categories of comment style were "Emotional" (n = 76), "Identification" (n = 62), "Educational Interest" (n = 45), "Mocking/Displeased" (n = 36), "Game Interest" (n = 32), and "Other" (n = 25). Without any advertising or marketing by the creators, over 194,400 views of the video were recorded in 8 years, an average of about 1,500 views per month. The use of YouTube, with its viral marketing potential, has created a vastly amplified reach for this educational offering that would otherwise have been impossible. Qualitative analysis of publicly posted comments in response to the video, which were generally positive, has led to a greater understanding of public reactions to such educational offerings. YouTube videos are already a rich source of data for psychiatric researchers, and psychiatric educators should consider posting high-quality video clips on publicly available social media platforms such as YouTube in order to reduce public stigma about psychiatric disorders and patients.
ERIC Educational Resources Information Center
Steeg, Susanna M.
2016-01-01
Professional learning communities (PLCs) constitute worthwhile spaces in which to study teacher participation in the reflective practices that have potential to shift their teaching. This qualitative case study details the interactions between dual-language and ELL teachers in a grade-level PLC as they met together to confer over video-clips of…
ERIC Educational Resources Information Center
Randler, Christoph; Demirhan, Eda; Wüst-Ackermann, Peter; Desch, Inga H.
2016-01-01
In science education, dissections of animals are an integral part of teaching, but they often evoke negative emotions. We aimed at reducing negative emotions (anxiety, negative affect [NA]) and increasing positive affect (PA) and self-efficacy by an experimental intervention using a predissection video to instruct students about fish dissection.…
Gaze Allocation in a Dynamic Situation: Effects of Social Status and Speaking
ERIC Educational Resources Information Center
Foulsham, Tom; Cheng, Joey T.; Tracy, Jessica L.; Henrich, Joseph; Kingstone, Alan
2010-01-01
Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in…
The Moving Image in Education Research: Reassembling the Body in Classroom Video Data
ERIC Educational Resources Information Center
de Freitas, Elizabeth
2016-01-01
While audio recordings and observation might have dominated past decades of classroom research, video data is now the dominant form of data in the field. Ubiquitous videography is standard practice today in archiving the body of both the teacher and the student, and vast amounts of classroom and experiment clips are stored in online archives. Yet…
Investigating Young Children's Talk about the Media
ERIC Educational Resources Information Center
Grace, Donna J.; Henward, Allison S.
2013-01-01
This study was an investigation into the ways in which two classes of six- and seven-year-old children in Hawaii talked about the media. The children were shown video clips from a variety of media and asked to respond both orally and in writing. The qualitative data gathered in this study were researcher notes, video and audio-taped focus group…
Flipped!: Want to Get Teens Excited about Summer Reading? Just Add Video
ERIC Educational Resources Information Center
Wooten, Jennifer
2009-01-01
Fully 57 percent of youth online watch videos, according to a Pew Internet & American Life study. And more and more are creating and sharing clips of their own making. With online engagement such an integral part of their world, Washington state's King County Library System (KCLS) decided to meet kids on their own turf by launching…
Which technology to investigate visual perception in sport: video vs. virtual reality.
Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit
2015-02-01
Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, such as video and virtual-environment-based methods, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study was to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared handball goalkeepers' performance using two standardized methodologies: video clips and a virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers indicate where the ball ends up) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggest that analysis of visual information uptake for handball goalkeepers is better performed using a virtual reality-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.
Clinical Assessment of Stereoacuity and 3-D Stereoscopic Entertainment
Tidbury, Laurence P.; Black, Robert H.; O’Connor, Anna R.
2015-01-01
Background/Aims: The perception of compelling depth is often reported in individuals with no clinically measurable stereoacuity. We aimed to investigate the potential cause of this finding by varying the amount of stereopsis available to the subject and assessing their perception of depth when viewing 3-D video clips and a Nintendo 3DS. Methods: Monocular blur was used to vary the interocular VA difference, creating 4 levels of measurable binocular deficit from normal stereoacuity to suppression. Stereoacuity was assessed at each level using the TNO, Preschool Randot®, Frisby, FD2, and Distance Randot® tests. Subjects also completed an object depth identification task using the Nintendo 3DS, a static 3DTV stereoacuity test, and a 3-D perception rating task of 6 video clips. Results: As interocular VA differences increased, stereoacuity of the 57 subjects (aged 16–62 years) decreased (e.g., 110”, 280”, 340”, and suppression). The ability to correctly identify depth on the Nintendo 3DS remained at 100% until suppression of one eye occurred. The perception of a compelling 3-D effect when viewing the video clips was rated high until suppression of one eye occurred, at which point the 3-D effect was still reported as fairly evident. Conclusion: If an individual has any level of measurable stereoacuity, the perception of 3-D when viewing stereoscopic entertainment is present. The presence of motion in stereoscopic video appears to provide cues to depth where static cues are not sufficient. This suggests a need to develop a dynamic test of stereoacuity, to allow fully informed patient management decisions to be made. PMID:26669421
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video shot detection in order to partition the multimedia material into semantically significant segments.
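The paper's acoustic-category modelling is not specified here, but the underlying idea of flagging audio change points to aid shot segmentation can be illustrated with a crude short-time-energy detector. This is a minimal sketch, not the authors' method; the frame length and threshold below are illustrative assumptions.

```python
import math

def short_time_energy(samples, frame_len=256):
    """Mean-square energy of each non-overlapping frame."""
    return [sum(x * x for x in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def audio_change_points(energies, threshold=4.0):
    """Flag frame boundaries where the energy ratio between adjacent
    frames exceeds `threshold` -- a crude stand-in for modelling
    distinct acoustic categories (speech, music, silence, ...)."""
    changes = []
    for i in range(1, len(energies)):
        lo, hi = min(energies[i - 1], energies[i]), max(energies[i - 1], energies[i])
        if lo > 0 and hi / lo > threshold:
            changes.append(i)
    return changes

# A quiet passage followed by a loud one: one change point expected.
quiet = [0.01 * math.sin(0.3 * n) for n in range(512)]
loud = [0.8 * math.sin(0.3 * n) for n in range(512)]
print(audio_change_points(short_time_energy(quiet + loud)))
```

A real system would compare spectral models (e.g. per-category likelihoods) rather than raw energy, but the segmentation output has the same shape: candidate boundaries to fuse with visual shot detection.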
An Overview of the MSFC Electrostatic Levitation Facility
NASA Technical Reports Server (NTRS)
Rogers, J. R.; Robinson, M. B.; Hyers, R. W.; Savage, L.; Rathz, T.
2000-01-01
Electrostatic levitation (ESL) provides a means to study molten materials in a contamination-free environment, with no contact with a container. Many phenomena important to materials science can be studied in the ESL. Solidification of metals, alloys and undercooled materials represents an important topic for ESL research. Recent studies of metals and alloys during solidification in the ESL are reported. Measurements include time-temperature-transformation behavior of metallic glass-forming alloys, solidification velocities, and microstructure. This multimedia report includes a video clip showing processing in the ESL, with descriptions of the different segments in the text.
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object filmed by a camera that is moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.
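The standard way to recover the object's lab-frame motion from footage shot by a translating, rotating camera is to invert the camera transform frame by frame. A minimal 2-D sketch of that step (not VideoPoint's actual routine; it assumes the camera's position and orientation are known for each frame):

```python
import math

def to_lab_frame(obj_cam, cam_pos, cam_angle):
    """Map a point measured in a translating, rotating camera frame
    back to the lab frame: rotate by the camera's orientation angle,
    then add the camera's position. Names are illustrative."""
    x, y = obj_cam
    c, s = math.cos(cam_angle), math.sin(cam_angle)
    return (cam_pos[0] + c * x - s * y,
            cam_pos[1] + s * x + c * y)

# A camera at (3, 4) rotated 90 degrees sees the object at (1, 0)
# in its own frame; in the lab frame the object sits near (3, 5).
print(to_lab_frame((1.0, 0.0), (3.0, 4.0), math.pi / 2))
```

Applying this per frame to the tracked pixel coordinates (after scaling to physical units) separates the object's true trajectory from the camera's motion.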
A1297 GPR vs. hydro video clip.
DOT National Transportation Integrated Search
2014-03-01
This research has examined the use of nondestructive techniques for concrete bridge deck condition assessments. The primary nondestructive testing/evaluation (NDT/NDE) technique utilized in this research was ground-coupled ground penetrating radar (G...
Waran, Vicknes; Bahuri, Nor Faizal Ahmad; Narayanan, Vairavan; Ganesan, Dharmendra; Kadir, Khairul Azmi Abdul
2012-04-01
The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips in 3gp file format of an entire scan series of patients, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on call after office hours. A total of 56 consecutive patients with acute neurosurgical problems requiring urgent after-hours consultation during a 6-month period prospectively had their images recorded and transmitted using the above method. The responses to the diagnosis and the management plan by two neurosurgeons (who were not on site), based on the images viewed on a mobile telephone, were reviewed by an independent observer and scored. In addition, a radiologist reviewed the original images directly on the hospital's Picture Archiving and Communication System (PACS), and this was compared with the neurosurgeons' responses. Both neurosurgeons involved in this study were in complete agreement in their diagnoses. The radiologist disagreed with the diagnosis in only one patient, giving a kappa coefficient of 0.88, indicating almost perfect agreement. The use of mobile telephones to transmit MPEG video clips of radiological images is very advantageous for carrying out emergency consultations in neurosurgery. The images accurately reflect the pathology in question, thereby reducing the incidence of medical errors from incorrect diagnosis, which otherwise may depend on just a verbal description.
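The kappa coefficient reported above measures inter-rater agreement corrected for chance. For reference, Cohen's kappa for two raters can be computed as in this minimal sketch (the example labels are fabricated, not the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement rate and p_e is the agreement expected by
    chance from each rater's marginal label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(k) / n) * (rater_b.count(k) / n)
              for k in labels)
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement gives kappa = 1.0.
print(cohens_kappa(["yes", "no", "yes"], ["yes", "no", "yes"]))
```

By the usual Landis-Koch benchmarks, values above 0.8 (such as the 0.88 reported here) are read as almost perfect agreement.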
Jang, Hye Won; Kim, Kyong-Jee
2014-03-21
Multimedia learning has been shown to be effective in clinical skills training. Yet, use of technology presents both opportunities and challenges to learners. The present study investigated student use and perceptions of online clinical videos for learning clinical skills and in preparing for the OSCE (Objective Structured Clinical Examination), with the aim of informing how to make more effective use of these resources. A mixed-methods design was used. A 30-item questionnaire was administered to investigate student use and perceptions of OSCE videos. Year 3 and 4 students from 34 Korean medical schools who had access to OSCE videos participated in the online survey. Additionally, a semi-structured interview of a group of Year 3 medical students was conducted for an in-depth understanding of student experience with OSCE videos. 411 students from 31 medical schools returned the questionnaires; a majority of them found OSCE videos effective for their learning of clinical skills and in preparing for the OSCE. The number of OSCE videos that the students viewed was moderately associated with their self-efficacy and preparedness for the OSCE (p < 0.05). One-third of those surveyed accessed the video clips using mobile devices; they agreed more with the statement that it was convenient to access the video clips than their peers who accessed the videos using computers (p < 0.05). Still, students reported lack of integration into the curriculum and lack of interaction as barriers to more effective use of OSCE videos. The present study confirms the overall positive impact of OSCE videos on student learning of clinical skills. Having faculty integrate these learning resources into their teaching, integrating interactive tools into this e-learning environment to foster interactions, and using mobile devices for convenient access are recommended to help students make more effective use of these resources.
Introduction: Intradural Spinal Surgery video supplement.
McCormick, Paul C
2014-09-01
This Neurosurgical Focus video supplement contains detailed narrated videos of a broad range of intradural pathology, such as neoplasms (including intramedullary, extramedullary, and dumbbell tumors), vascular malformations, functional disorders, and rare conditions that are often overlooked or misdiagnosed, such as arachnoid cysts, ventral spinal cord herniation, and dorsal arachnoid web. The intent of this supplement is to provide meaningful educational and instructional content at all levels of training and practice. As such, the selected video submissions each provide a comprehensive, detailed narrative description and a coordinated video containing the entire spectrum of relevant information, including imaging, operative setup and positioning, and exposure, as well as surgical strategies, techniques, and sequencing toward the safe and effective achievement of the operative objective. This level of detail often necessitated a longer video duration than is typical of oral presentations or standard video clips from peer-reviewed publications. Unfortunately, space limitations precluded the inclusion of several other excellent video submissions in this supplement. While most videos in this supplement reflect standard operative approaches and techniques, there are also submissions that describe innovative exposures and techniques that have expanded surgical options, such as ventral approaches, stereotactic guidance, and minimally invasive exposures. There is some redundancy in topics and techniques, both to underscore fundamental surgical principles and to provide complementary perspective from different surgeons. It has been my privilege to serve as guest editor for this video supplement, and I would like to extend my appreciation to Mark Bilsky, Bill Krauss, and Sander Connolly for reviewing the large number of submitted videos.
Most of all, I would like to thank the authors for their skill and effort in the preparation of the outstanding videos that constitute this video supplement.
Amygdala activity at encoding correlated with long-term, free recall of emotional information.
Cahill, L; Haier, R J; Fallon, J; Alkire, M T; Tang, C; Keator, D; Wu, J; McGaugh, J L
1996-07-23
Positron emission tomography of cerebral glucose metabolism in adult human subjects was used to investigate amygdaloid complex (AC) activity associated with the storage of long-term memory for emotionally arousing events. Subjects viewed two videos (one in each of two separate positron emission tomography sessions, separated by 3-7 days) consisting either of 12 emotionally arousing film clips ("E" film session) or of 12 relatively emotionally neutral film clips ("N" film session), and rated their emotional reaction to each film clip immediately after viewing it. Three weeks after the second session, memory for the videos was assessed in a free recall test. As expected, the subjects' average emotional reaction to the E films was higher than that for the N films. In addition, the subjects recalled significantly more E films than N films. Glucose metabolic rate of the right AC while viewing the E films was highly correlated with the number of E films recalled. AC activity was not significantly correlated with the number of N films recalled. The findings support the view derived from both animal and human investigations that the AC is selectively involved with the formation of enhanced long-term memory associated with emotionally arousing events.
Dias, Raylene; Baliarsing, Lipika; Barnwal, Neeraj Kumar; Mogal, Shweta; Gujjar, Pinakin
2016-01-01
Background and Aims: A high incidence of anxiety has been reported in patients in the operating theatre setting. We developed a short video clip of 206 s duration depicting the procedure of spinal anaesthesia (SAB) and aimed to evaluate the effect of this video on perioperative anxiety in patients undergoing procedures under SAB. Methods: A prospective randomised study of 200 patients undergoing surgery under SAB was conducted. Patients were allotted to either the non-video group (Group NV - those who were not shown the video) or the video group (Group V - those who were shown the video). Anxiety was assessed using the Spielberger State-Trait Anxiety Inventory during the pre-anaesthetic check-up and before surgery. Haemodynamic parameters such as heart rate (HR) and mean arterial pressure (MAP) were also noted. Student's t-test was used for normally distributed and Mann–Whitney U-test for non-normally distributed quantitative data. Chi-square test was used for categorical data. Results: Both groups were comparable with respect to baseline anxiety scores and haemodynamic parameters. The non-video group showed a significant increase in state anxiety scores before administration of SAB (P < 0.001). Patients in the video group had significantly lower HR and MAP preoperatively (P < 0.001). The prevalence of 'high anxiety' for SAB was 81% in our study, which decreased to 66% in the video group before surgery. Conclusion: Multimedia information in the form of a short audiovisual clip is an effective and feasible method to reduce perioperative anxiety related to SAB. PMID:27942059
Improving the Identification of Neonatal Encephalopathy: Utility of a Web-Based Video Tool.
Ivy, Autumn S; Clark, Catherine L; Bahm, Sarah M; Meurs, Krisa P Van; Wusthoff, Courtney J
2017-04-01
Objective: This study tested the effectiveness of a video teaching tool in improving identification and classification of encephalopathy in infants. Study Design: We developed an innovative video teaching tool to help clinicians improve their skills in interpreting the neonatal neurological examination for grading encephalopathy. Pediatric residents were shown 1-minute video clips demonstrating exam findings in normal neonates and neonates with various degrees of encephalopathy. Findings from five domains were demonstrated: spontaneous activity, level of alertness, posture/tone, reflexes, and autonomic responses. After each clip, subjects were asked to identify whether the exam finding was normal or consistent with mild, moderate, or severe abnormality. Subjects were then directed to a web-based teaching toolkit containing a compilation of videos demonstrating normal and abnormal findings on the neonatal neurological examination. Immediately after training, subjects underwent posttesting, again identifying exam findings as normal or as mild, moderate, or severe abnormality. Results: Residents improved in their overall ability to identify and classify neonatal encephalopathy after viewing the teaching tool. In particular, identification of abnormal spontaneous activity, reflexes, and autonomic responses improved most. Conclusion: This pretest/posttest evaluation of an educational tool demonstrates that after viewing our toolkit, pediatric residents were able to improve their overall ability to detect neonatal encephalopathy. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Hydroacoustic Evaluation of Fish Passage Through Bonneville Dam in 2005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ploskey, Gene R.; Weiland, Mark A.; Zimmerman, Shon A.
2006-12-04
The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory (PNNL) conduct fish-passage studies at Bonneville Dam in 2005. These studies support the Portland District's goal of maximizing fish-passage efficiency (FPE) and obtaining 95% survival for juvenile salmon passing Bonneville Dam. Major passage routes include 10 turbines and a sluiceway at Powerhouse 1 (B1), an 18-bay spillway, and eight turbines and a sluiceway at Powerhouse 2 (B2). In this report, we present results of two studies related to juvenile salmonid passage at Bonneville Dam. The studies were conducted between April 16 and July 15, 2005, encompassing most of the spring and summer migrations. Studies included evaluations of (1) Project fish passage efficiency and other major passage metrics, and (2) smolt approach and fate at B1 Sluiceway Outlet 3C from the B1 forebay. Some of the large appendices are only presented on the compact disk (CD) that accompanies the final report. Examples include six large comma-separated-variable (.CSV) files of hourly fish passage, hourly variances, and Project operations for spring and summer from Appendix E, and large Audio Video Interleave (AVI) files with DIDSON-movie clips of the area upstream of B1 Sluiceway Outlet 3C (Appendix H). Those video clips show smolts approaching the outlet, predators feeding on smolts, and vortices that sometimes entrained approaching smolts into turbines. The CD also includes Adobe Acrobat Portable Document Files (PDF) of the entire report and appendices.
NASA Astrophysics Data System (ADS)
1989-10-01
This videotape was produced for hand-out to both local and national broadcast media as a prelude to the launch of the Cosmic Background Explorer. The tape consists of short clips with multi-channel sound to facilitate news media editing.
Using Video Modeling as an Anti-bullying Intervention for Children with Autism Spectrum Disorder.
Rex, Catherine; Charlop, Marjorie H; Spector, Vicki
2018-03-07
In the present study, we used a multiple baseline design across participants to assess the efficacy of a video modeling intervention to teach six children with autism spectrum disorder (ASD) to assertively respond to bullying. During baseline, the children made few appropriate responses upon viewing video clips of bullying scenarios. During the video modeling intervention, participants viewed videos of models assertively responding to three types of bullying: physical bullying, verbal bullying, and social exclusion. Results indicated that all six children learned through video modeling to make appropriate assertive responses to bullying scenarios. Four of the six children demonstrated learning in the in situ bullying probes. The results are discussed in terms of an intervention for victims of bullying with ASD.
Geographic Video 3d Data Model And Retrieval
NASA Astrophysics Data System (ADS)
Han, Z.; Cui, C.; Kong, Y.; Wu, H.
2014-04-01
Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. Using video segments and video frames, we defined three categories of geometry objects following the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relation between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We describe the query methods using the structured query language (SQL) in detail. The experiments indicate that the model is a multi-purpose, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
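The field-of-view query described above reduces, in 2-D, to testing whether a query point falls inside a viewing cone built from the camera's position, azimuth, angle of view and sensing range. A minimal sketch under those assumptions (function names are illustrative, not the paper's schema, and the cone is approximated as a triangle rather than a sector):

```python
import math

def fov_polygon(cam, azimuth, angle_of_view, rng):
    """Approximate a camera's 2-D field of view as a triangle: the
    camera position plus two rays at azimuth +/- half the angle of
    view, truncated at a maximum sensing range `rng`."""
    half = angle_of_view / 2
    left = (cam[0] + rng * math.cos(azimuth - half),
            cam[1] + rng * math.sin(azimuth - half))
    right = (cam[0] + rng * math.cos(azimuth + half),
             cam[1] + rng * math.sin(azimuth + half))
    return [cam, left, right]

def contains(poly, pt):
    """Ray-casting point-in-polygon test (even-odd rule)."""
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > pt[1]) != (y2 > pt[1]):
            x_cross = x1 + (pt[1] - y1) * (x2 - x1) / (y2 - y1)
            if pt[0] < x_cross:
                inside = not inside
    return inside

# Camera at the origin looking east with a 60-degree view, 100 m range:
cone = fov_polygon((0.0, 0.0), 0.0, math.pi / 3, 100.0)
print(contains(cone, (50.0, 0.0)))   # directly ahead of the camera
print(contains(cone, (-10.0, 0.0)))  # behind the camera
```

In a spatial database the same predicate would be issued as an SQL spatial-relation query (e.g. an intersection test) against the stored FOV geometries rather than computed in application code.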
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.; Hanshumaker, W.; Osis, V.; Hamilton, C.
2002-12-01
We have created a new interactive exhibit in which the user can sit down and simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. This new public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. The exhibit is designed to look like the real ROPOS control console and includes three video monitors, a PC, a DVD player, an overhead speaker, graphic panels, buttons, lights, dials, and a seat in front of a joystick. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. The user can choose among 3 different dive sites in the caldera of Axial Volcano. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360-degree panorama and can look in all directions by manipulating the joystick. By clicking on markers embedded in the scene, the user can either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene. Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user is finished exploring the dive site, they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. The user can then choose a different dive or make the same dive again. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips.
The exhibit software was created with Macromedia Director using Apple Quicktime and Quicktime VR. The exhibit is based on the NeMO Explorer web site (http://www.pmel.noaa.gov/vents/nemo/explorer.html).
Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video
NASA Astrophysics Data System (ADS)
Yeo, Boon-Lock; Liu, Bede
1996-03-01
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
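The detection step can be illustrated in miniature: overlaid captions are regions of dense, high-contrast intensity transitions, so a crude detector flags rows of a reduced grayscale frame whose horizontal-gradient density is high. This is a sketch of the general idea only, not the authors' MPEG-domain method, and the thresholds are illustrative.

```python
def caption_rows(frame, grad_thresh=50, density_thresh=0.5):
    """Flag rows of a reduced grayscale frame (a list of pixel rows)
    whose density of strong horizontal intensity transitions exceeds
    `density_thresh` -- a crude proxy for overlaid caption text."""
    flagged = []
    for r, row in enumerate(frame):
        strong = sum(abs(row[c + 1] - row[c]) > grad_thresh
                     for c in range(len(row) - 1))
        if strong / (len(row) - 1) >= density_thresh:
            flagged.append(r)
    return flagged

# Flat background rows versus one row alternating dark/bright "text".
flat = [10] * 16
text = [10 if c % 2 else 200 for c in range(16)]
print(caption_rows([flat, flat, text, flat]))
```

The paper's key efficiency point is that such reduced frames can be reconstructed directly from the DC coefficients of compressed MPEG video, so no full-frame decompression is needed before running a detector of this kind.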
Video-Puff of Air Hits Ball of Water in Space Onboard the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
2003-01-01
Saturday Morning Science, the science-of-opportunity series of applied experiments and demonstrations performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. In this video clip, Dr. Pettit demonstrates the phenomenon of a puff of air hitting a ball of water that is free floating in space. Watch the video to see why Dr. Pettit remarks, 'I'd hate to think that our planet would go through these kinds of gyrations if it got whacked by a big asteroid'.
Analysis of environmental sounds
NASA Astrophysics Data System (ADS)
Lee, Keansub
Environmental sound archives - casual recordings of people's daily life - are easily collected by MP3 players or camcorders at low cost and with high reliability, and shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio, and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of such real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. The third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as technically feasible.
These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common from all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches tested on the 60h personal audio archives or 1900 YouTube video clips is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.
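The pitch detector is described only at a high level above. Its core, picking the strongest autocorrelation peak within the plausible pitch range, can be sketched as follows; the rejection threshold is an illustrative assumption, and the thesis's autocorrelogram-domain noise suppression is omitted.

```python
import math

def detect_pitch(frame, sr, fmin=80.0, fmax=400.0):
    """Return the frequency (Hz) of the strongest normalized
    autocorrelation peak within [fmin, fmax], or None if the frame
    looks aperiodic (silence or weak periodicity)."""
    n = len(frame)
    lag_lo, lag_hi = int(sr / fmax), int(sr / fmin)
    energy = sum(x * x for x in frame)
    if energy == 0:
        return None
    best_lag, best_r = None, 0.0
    for lag in range(lag_lo, min(lag_hi, n - 1) + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    if best_lag is None or best_r < 0.3:   # weak periodicity: reject
        return None
    return sr / best_lag

# A pure 200 Hz tone at an 8 kHz sampling rate.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(1024)]
print(detect_pitch(tone, sr))
```

A speech/music locator of the kind described would run this per frame and mark regions where a stable pitch track emerges despite background noise.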
ERIC Educational Resources Information Center
Tekin, Inan; Parmaksiz, Ramazan Sükrü
2016-01-01
The purpose of this research is to examine whether using feature films in video lessons has an effect on the development of listening skills of students or not. The research has been conducted at one of the state universities in Black Sea region of Turkey with 126 students. The students watched and listened to only the sentences taken from…
Sleep atlas and multimedia database.
Penzel, T; Kesper, K; Mayer, G; Zulley, J; Peter, J H
2000-01-01
The ENN sleep atlas and database was set up on a dedicated server connected to the internet, providing all services such as WWW, FTP and telnet access. The database serves as a platform to promote the goals of the European Neurological Network, to exchange patient cases for second opinions between experts, and to create a case-oriented multimedia sleep atlas with descriptive text, images and video clips of all known sleep disorders. The sleep atlas consists of a small public part and a large private part for members of the consortium. Twenty patient cases were collected and presented with educational information similar to published case reports. Case reports are complemented with images, video clips and biosignal recordings. A Java-based viewer for biosignals provided in EDF format was installed in order to move freely within the sleep recordings without the need to download the full recording to the client.
Thunborg, Charlotta; Salzmann-Erikson, Martin
2017-01-01
Communication skills are vital for successful relationships between patients and health care professionals. Failure to communicate may lead to a lack of understanding and may result in strained interactions. Our theoretical point of departure was to make use of chaos and complexity theories. To examine the features of strained interactions and to discuss their relevance for health care settings. A netnography study design was applied. Data were purposefully sampled, and video clips (122 minutes from 30 video clips) from public online venues were used. The results are presented in four categories: 1) unpredictability, 2) sensitivity dependence, 3) resistibility, and 4) iteration. They are all features of strained interactions. Strained interactions are a complex phenomenon that exists in health care settings. The findings provide health care professionals guidance to understand the complexity and the features of strained interactions.
Woolf-King, Sarah E.; Maisto, Stephen; Carey, Michael; Vanable, Peter
2013-01-01
Experimental research on sexual decision making is limited, despite the public health importance of such work. We describe formative work conducted in advance of an experimental study designed to evaluate the effects of alcohol intoxication and sexual arousal on risky sexual decision making among men who have sex with men. In Study 1, we describe the procedures for selecting and validating erotic film clips (to be used for the experimental manipulation of arousal). In Study 2, we describe the tailoring of two interactive role-play videos to be used to measure risk perception and communication skills in an analog risky sex situation. Together, these studies illustrate a method for creating experimental stimuli to investigate sexual decision making in a laboratory setting. Research using this approach will support experimental research that affords a stronger basis for drawing causal inferences regarding sexual decision making. PMID:19760530
Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.
Smyth, Rachael E; Oram Cardy, Janis; Purcell, David
2017-06-01
This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer grade digital camera. A visual inspection time task was recorded using short high-speed video clips and the timing as reported by the task's program was compared to the timing as recorded in the video clips. Discrepancies in these two timing reports were investigated further and based on display refresh rate, a decision was made whether the discrepancy was large enough to affect the results as reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing in any software program on any operating system and display.
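The core comparison in such a timing test, counting camera frames and converting against the camera's frame rate to check the duration the task software reported, can be sketched as follows. The function names and the "one display refresh interval" threshold are illustrative assumptions, not necessarily the criterion used in the study.

```python
def timing_error_ms(reported_ms, n_frames, camera_fps):
    """Difference between the duration reported by the task software and
    the duration measured by counting frames in a high-speed video clip."""
    measured_ms = n_frames / camera_fps * 1000.0
    return measured_ms - reported_ms

def exceeds_refresh(reported_ms, n_frames, camera_fps, display_hz=60):
    """Flag a discrepancy larger than one display refresh interval
    (about 16.7 ms at 60 Hz), one reasonable tolerance for a visual task."""
    return abs(timing_error_ms(reported_ms, n_frames, camera_fps)) > 1000.0 / display_hz
```

For instance, a 100 ms stimulus spanning 24 frames of 240 fps video matches exactly, while 30 frames would indicate a 25 ms overrun, beyond one 60 Hz refresh.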
Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude
2015-01-01
“Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
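The reported association, a linear regression of facial-expression proportions on PTSD symptom scores, reduces in its simplest unadjusted form to an ordinary least-squares fit. A minimal sketch with hypothetical numbers, ignoring the covariates (sex, age, baseline expression) that the study controlled for:

```python
def ols_slope(x, y):
    """Closed-form simple linear regression: slope and intercept of y on x.
    Here x would be PTSD symptom scores and y the proportion of neutral
    facial expressions during the comedy clip (hypothetical data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    b = sxy / sxx
    return b, my - b * mx
```

A positive slope would correspond to the study's finding that higher symptom scores go with a greater proportion of neutral expressions.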
Emotional Diathesis, Emotional Stress, and Childhood Stuttering
Conture, Edward G.; Walden, Tedra A.; Jones, Robin M.; Kim, Hanjoe
2016-01-01
Purpose: The purpose of this study was to determine (a) whether emotional reactivity and emotional stress of children who stutter (CWS) are associated with their stuttering frequency, (b) when the relationship between emotional reactivity and stuttering frequency is more likely to exist, and (c) how these associations are mediated by a 3rd variable (e.g., sympathetic arousal). Method: Participants were 47 young CWS (M age = 50.69 months, SD = 10.34). Measurement of participants' emotional reactivity was based on parental report, and emotional stress was engendered by viewing baseline, positive, and negative emotion-inducing video clips, with stuttered disfluencies and sympathetic arousal (indexed by tonic skin conductance level) measured during a narrative after viewing each of the various video clips. Results: CWS's positive emotional reactivity was positively associated with percentage of their stuttered disfluencies regardless of emotional stress condition. CWS's negative emotional reactivity was more positively correlated with percentage of stuttered disfluencies during a narrative after a positive, compared with baseline, emotional stress condition. CWS's sympathetic arousal did not appear to mediate the effect of emotional reactivity, emotional stress condition, and their interaction on percentage of stuttered disfluencies, at least during this experimental narrative task following emotion-inducing video clips. Conclusions: Results were taken to suggest an association between young CWS's positive emotional reactivity and stuttering, with negative reactivity seemingly more associated with these children's stuttering during positive emotional stress (a stress condition possibly associated with lesser degrees of emotion regulation). Such findings seem to support the notion that emotional processes warrant inclusion in any truly comprehensive account of childhood stuttering. PMID:27327187
Context-specific effects of musical expertise on audiovisual integration
Bishop, Laura; Goebl, Werner
2014-01-01
Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819
Emotional Diathesis, Emotional Stress, and Childhood Stuttering.
Choi, Dahye; Conture, Edward G; Walden, Tedra A; Jones, Robin M; Kim, Hanjoe
2016-08-01
The purpose of this study was to determine (a) whether emotional reactivity and emotional stress of children who stutter (CWS) are associated with their stuttering frequency, (b) when the relationship between emotional reactivity and stuttering frequency is more likely to exist, and (c) how these associations are mediated by a 3rd variable (e.g., sympathetic arousal). Participants were 47 young CWS (M age = 50.69 months, SD = 10.34). Measurement of participants' emotional reactivity was based on parental report, and emotional stress was engendered by viewing baseline, positive, and negative emotion-inducing video clips, with stuttered disfluencies and sympathetic arousal (indexed by tonic skin conductance level) measured during a narrative after viewing each of the various video clips. CWS's positive emotional reactivity was positively associated with percentage of their stuttered disfluencies regardless of emotional stress condition. CWS's negative emotional reactivity was more positively correlated with percentage of stuttered disfluencies during a narrative after a positive, compared with baseline, emotional stress condition. CWS's sympathetic arousal did not appear to mediate the effect of emotional reactivity, emotional stress condition, and their interaction on percentage of stuttered disfluencies, at least during this experimental narrative task following emotion-inducing video clips. Results were taken to suggest an association between young CWS's positive emotional reactivity and stuttering, with negative reactivity seemingly more associated with these children's stuttering during positive emotional stress (a stress condition possibly associated with lesser degrees of emotion regulation). Such findings seem to support the notion that emotional processes warrant inclusion in any truly comprehensive account of childhood stuttering.
Stephen, Kate; Cumming, Grant P
2012-09-01
This paper describes the investigation, categorization/characterization and viewing of pelvic floor muscle exercise (PFME) videos on YouTube from the perspective of the 'wisdom of the crowd'. The aim of the research was to increase awareness of the type of clips that individuals are likely to come across when searching YouTube and to describe trends and popularity. This awareness will be useful for the design of continence promotion services, especially for hard-to-reach individuals. Web-based videos relating to PFME were identified by searching YouTube using the snowball technique. The main outcome measures were: number of views; the approach taken (health, fitness, sexual and pregnancy); product promotion; and the use of music, visual cues and elements designed to encourage exercise. The number of views of each video was recorded at three points over a seven-month period. Twenty-two videos were identified. Overall, these videos had been viewed over 430,000 times during the study period. One video was viewed over 100,000 times, and overall the median increase in views was 59.4%. YouTube is increasingly used to access information about pelvic floor exercises. Different approaches are used to communicate PFME information, but there are no formal structures for quality control. Further research is required to identify which elements of the video clips are effective in communicating information and in motivating exercise, and to establish appropriate protocols. Kitemarking is recommended so that women obtain correct advice.
Ahn, Ji Yun; Cho, Gyu Chong; Shon, You Dong; Park, Seung Min; Kang, Ku Hyun
2011-12-01
Skills related to cardiopulmonary resuscitation (CPR) and automated external defibrillator (AED) use by lay responders decay rapidly after training, and efforts are required to maintain competence among trainees. We examined whether repeated viewing of a reminder video on a mobile phone would be an effective means of maintaining CPR and AED skills in lay responders. In a single-blind case-control study, 75 male students received training in CPR and AED use. They were allocated either to the control or to the video-reminded group, who received a memory card containing a video clip about CPR and AED use for their mobile phone, which they were repeatedly encouraged to watch by SMS text message. CPR and AED skills were assessed in scenario format by examiners immediately and 3 months after initial training. Three months after initial training, the video-reminded group showed more accurate airway opening (P<0.001), breathing check (P<0.001), first rescue breathing (P=0.004), hand positioning (P=0.004), AED electrode positioning (P<0.001), pre-shock safety check (P<0.001), defibrillation within 90s (P=0.010), and resuming CPR after defibrillation (P<0.001) than controls. They also showed significantly higher self-assessed CPR confidence scores and increased willingness to perform bystander CPR in cardiac arrest than the controls at 3 months (P<0.001, P=0.024, respectively). Repeated viewing of a reminder video clip on a mobile phone increases retention of CPR and AED skills in lay responders. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
1988-05-01
This video shows, with high quality animation, the formation of the Solar System: comets, Jupiter, Europa, Saturn, Titan, Mars, the Sun, and early Earth. The focus is on life elsewhere in the Solar System. The recording was prepared for a news conference.
Driver comprehension of managed lane signing.
DOT National Transportation Integrated Search
2009-09-01
A statewide survey of driver comprehension of managed lane signing is reported. Computer-based surveys were conducted using video clips of computer animations as well as still images of signs. The surveys were conducted in four Texas cities with a to...
Bocher, M; Chisin, R; Parag, Y; Freedman, N; Meir Weil, Y; Lester, H; Mishani, E; Bonne, O
2001-07-01
This study attempted to use PET and 15O-H2O to measure changes in regional cerebral blood flow (rCBF) during sexual arousal evoked in 10 young heterosexual males while they watched a pornographic video clip, featuring heterosexual intercourse. This condition was compared with other mental setups evoked by noisy, nature, and talkshow audiovisual clips. Immediately after each clip, the participants answered three questions pertaining to what extent they thought about sex, felt aroused, and sensed an erection. They scored their answers using a 1 to 10 scale. SPM was used for data analysis. Sexual arousal was mainly associated with activation of bilateral, predominantly right, inferoposterior extrastriate cortices, of the right inferolateral prefrontal cortex and of the midbrain. The significance of those findings is discussed in the light of current theories concerning selective attention, "mind reading" and mirroring, reinforcement of pleasurable stimuli, and penile erection.
Obesity in the new media: a content analysis of obesity videos on YouTube.
Yoo, Jina H; Kim, Junghyun
2012-01-01
This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed on YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although there was a tendency that some video categories have started to show other causal claims or solutions. However, due to the prevailing emphasis on personal causes and solutions, numerous YouTube videos had a theme of weight-based teasing, or showed obese persons engaging in stereotypical eating behaviors. We discuss a potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing stigmatization of obese persons.
Hierarchical vs non-hierarchical audio indexation and classification for video genres
NASA Astrophysics Data System (ADS)
Dammak, Nouha; BenAyed, Yassine
2018-04-01
In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the prominent advantage of capturing local temporal information. The main contribution of our study is to show the strong effect on classification accuracy of using a hierarchical categorization structure based on the Mel Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
Impact of a communication skills audiovisual package on medical students' knowledge.
Saab, Bassem R; Usta, Jinan; Major, Stella; Antoun, Jumana
2009-01-01
Over the last decade, more emphasis has been put on teaching communication skills (CS). The use of videos and role-play has been suggested to improve CS. This article presents the impact of an audiovisual package on promoting medical students' knowledge of CS. Seventy-five second-year medical students - distributed into eight groups led by four facilitators - critiqued a video clip immediately before and after the introduction of a communication skills audiovisual package. The skills taught included opening the interview, questioning, facilitation, clarification, reflection, confrontation, summarizing, and preparation of the patient for the physical exam. The students also role-played the reviewed scenario. The students' pre- and post-intervention responses were analyzed using a standardized grading form. There was a significant improvement in students' knowledge (p < 0.001) after the introduction of the intervention in all the CS taught except closed-ended questioning. This improvement was consistent among the four facilitators. Reviewing video scenarios and role-playing improved knowledge of core communication skills among second-year medical students, as assessed by a video-based written examination.
Maternal response to child affect: Role of maternal depression and relationship quality.
Morgan, Judith K; Ambrosia, Marigrace; Forbes, Erika E; Cyranowski, Jill M; Amole, Marlissa C; Silk, Jennifer S; Elliott, Rosalind D; Swartz, Holly A
2015-11-15
Maternal depression is associated with negative outcomes for offspring, including increased incidence of child psychopathology. Quality of mother-child relationships can be compromised among affectively ill dyads, such as those characterized by maternal depression and child psychopathology, and negatively impact outcomes bidirectionally. Little is known about the neural mechanisms that may modulate depressed mothers' responses to their psychiatrically ill children during middle childhood and adolescence, partially because of a need for ecologically valid personally relevant fMRI tasks that might most effectively elicit these neural mechanisms. The current project evaluated maternal response to child positive and negative affective video clips in 19 depressed mothers with psychiatrically ill offspring using a novel fMRI task. The task elicited activation in the ventral striatum when mothers viewed positive clips and insula when mothers viewed negative clips of their own (versus unfamiliar) children. Both types of clips elicited activation in regions associated with affect regulation and self-related and social processing. Greater lifetime number of depressive episodes, comorbid anxiety, and poor mother-child relationship quality all emerged as predictors of maternal response to child affect. Findings may be specific to dyads with psychiatrically ill children. Altered neural response to child affect may be an important characteristic of chronic maternal depression and may impact mother-child relationships negatively. Existing interventions for depression may be improved by helping mothers respond to their children's affect more adaptively. Copyright © 2015 Elsevier B.V. All rights reserved.
Behrends, Marianne; Stiller, Gerald; Dudzinska, Agnieszka; Schneidewind, Sabine
2016-01-01
To improve medical students' competences in physical examination, video clips were created with and without an explanatory commentary. The uncommented videos show the communication and interaction between physician and patient during a physical examination; the commented videos show the single steps of the physical examination supplemented with an off-screen commentary emphasizing important facts. To investigate whether the uncommented, more authentic videos are more helpful for practicing a physical examination than the commented videos, we surveyed 133 students online. 72% of the students used the uncommented videos for practicing with others, compared to 55% using the commented videos. 37% of the students think that practical skills can be learned better with the uncommented videos. In general, 97% state that the videos helped them to improve their skills. Our findings indicate that the cinematic form of an educational video has an effect on learning behavior, learning success and didactic quality.
Using a new, free spectrograph program to critically investigate acoustics
NASA Astrophysics Data System (ADS)
Ball, Edward; Ruiz, Michael J.
2016-11-01
We have developed an online spectrograph program with a bank of over 30 audio clips to visualise a variety of sounds. Our audio library includes everyday sounds such as speech, singing, musical instruments, birds, a baby, cat, dog, sirens, a jet, thunder, and screaming. We provide a link to a video of the sound sources superimposed with their respective spectrograms in real time. Readers can use our spectrograph program to view our library, open their own desktop audio files, and use the program in real time with a computer microphone.
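The underlying computation of any spectrograph display, a magnitude spectrum per windowed frame, can be sketched in a few lines. This is a generic illustration (direct DFT with a Hann window), not the authors' program, and it is far slower than an FFT-based implementation would be.

```python
import math, cmath

def spectrogram(signal, frame=64, hop=32):
    """Magnitude spectrogram of a real signal: time runs along the frame
    axis, frequency along the bin axis, magnitude gives the intensity a
    spectrograph would display."""
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        # Hann window suppresses spectral leakage at the frame edges
        seg = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)))
               for n in range(frame)]
        mags = []
        for k in range(frame // 2 + 1):  # real signal: keep bins 0..N/2
            acc = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame)
                      for n in range(frame))
            mags.append(abs(acc))
        frames.append(mags)
    return frames
```

A pure tone shows up as a single bright horizontal band: a sinusoid with 8 cycles per 64-sample frame peaks at bin 8 in every frame.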
Baghdad. The Urban Sanctuary in Desert Storm
1997-01-01
Jenoub telephone exchange (Ma’moon in Al Karkh), Maiden Square (Bab al Muadem) telephone exchange, Saddam City exchange and radio relay, and Shemal ...on targets. Hits are bombs delivered and scored by the 37th Wing as on or near aimpoints based upon onboard gun camera video. Misses are bombs...bombs, highlighting Baghdad from the first night. US military spokesmen, who chose the quick and glitzy sound bite and video clip when more bal
An efficient approach for video information retrieval
NASA Astrophysics Data System (ADS)
Dong, Daoguo; Xue, Xiangyang
2005-01-01
Today, more and more video information can be accessed through the internet, satellite, etc. Retrieving specific video information from a large-scale video database has become an important and challenging research topic in the area of multimedia information retrieval. In this paper, we introduce a new and efficient index structure, OVA-File, which is a variant of VA-File. In OVA-File, approximations that are close to each other in data space are stored in close positions within the approximation file. The benefit is that only the part of the approximations close to the query vector needs to be visited to obtain the query result. Both a shot query algorithm and a video clip query algorithm are proposed to support efficient video information retrieval. The experimental results showed that queries based on OVA-File were much faster than those based on VA-File, with only a small loss of result quality.
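The VA-File idea that OVA-File builds on, scanning small quantized approximations and visiting full vectors only while a cell's lower-bound distance could still beat the current best, can be sketched for a nearest-neighbor query. This is a generic illustration over vectors in [0,1)^d, not the OVA-File layout itself.

```python
def approximate(vec, bits=4):
    """Quantize each coordinate in [0,1) to a small cell id: the compact
    VA-File 'approximation' that is scanned instead of the full vectors."""
    levels = 1 << bits
    return tuple(min(int(v * levels), levels - 1) for v in vec)

def lower_bound(query, approx, bits=4):
    """Minimum possible squared distance from query to any point in the cell."""
    levels = 1 << bits
    lb = 0.0
    for q, cell in zip(query, approx):
        lo, hi = cell / levels, (cell + 1) / levels
        if q < lo:
            lb += (lo - q) ** 2
        elif q > hi:
            lb += (q - hi) ** 2
    return lb

def search(query, data, bits=4):
    """Exact nearest neighbor via the VA-File filter-and-refine scan."""
    approxes = [approximate(v, bits) for v in data]
    order = sorted(range(len(data)),
                   key=lambda i: lower_bound(query, approxes[i], bits))
    best, best_d = None, float("inf")
    for i in order:
        if lower_bound(query, approxes[i], bits) >= best_d:
            break  # no remaining cell can beat the best exact distance
        d = sum((q - v) ** 2 for q, v in zip(query, data[i]))
        if d < best_d:
            best, best_d = i, d
    return best
```

OVA-File's refinement is in how the approximations are ordered on disk, so that the low-lower-bound cells cluster together and most of the file is never read.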
Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines
2015-01-01
One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.
Viewer discretion advised: is YouTube a friend or foe in surgical education?
Rodriguez, H Alejandro; Young, Monica T; Jackson, Hope T; Oelschlager, Brant K; Wright, Andrew S
2018-04-01
In the current era, trainees frequently use unvetted online resources for their own education, including viewing surgical videos on YouTube. While operative videos are an important resource in surgical education, YouTube content is not selected or organized by quality but instead is ranked by popularity and other factors. This creates a potential for videos that feature poor technique or critical safety violations to become the most viewed for a given procedure. A YouTube search for "Laparoscopic cholecystectomy" was performed. Search results were screened to exclude animations and lectures; the top ten operative videos were evaluated. Three reviewers independently analyzed each of the 10 videos. Technical skill was rated using the GOALS score. Establishment of a critical view of safety (CVS) was scored according to CVS "doublet view" score, where a score of ≥5 points (out of 6) is considered satisfactory. Videos were also screened for safety concerns not listed by the previous tools. Median competence score was 8 (±1.76) and difficulty was 2 (±1.8). GOALS score median was 18 (±3.4). Only one video achieved adequate critical view of safety; median CVS score was 2 (range 0-6). Five videos were noted to have other potentially dangerous safety violations, including placing hot ultrasonic shears on the duodenum, non-clipping of the cystic artery, blind dissection in the hepatocystic triangle, and damage to the liver capsule. Top ranked laparoscopic cholecystectomy videos on YouTube show suboptimal technique with half of videos demonstrating concerning maneuvers and only one in ten having an adequate critical view of safety. While observing operative videos can be an important learning tool, surgical educators should be aware of the low quality of popular videos on YouTube. Dissemination of high-quality content on video sharing platforms should be a priority for surgical societies.
X-15 drop launch, view from B-52 mothership
NASA Technical Reports Server (NTRS)
1960-01-01
This roughly 20-second video clip shows the first planned glide flight of X-15 #1 on June 8, 1959. Then-North American pilot Scott Crossfield flew the mission, dropped from the B-52A mothership that bore the tail number 0003.
de Vries, Merlijn W; Visscher, Corine; Delwel, Suzanne; van der Steen, Jenny T; Pieper, Marjoleine J C; Scherder, Erik J A; Achterberg, Wilco P; Lobbezoo, Frank
2016-01-01
Objectives. The aim of this study was to establish the reliability of the "chewing" subscale of the OPS-NVI, a novel tool designed to estimate the presence and severity of orofacial pain in nonverbal patients. Methods. The OPS-NVI consists of 16 items for observed behavior, classified into four categories, plus a subjective estimate of pain. Two observers used the OPS-NVI on 237 video clips of people with dementia in Dutch nursing homes during their meals to observe their behavior and to estimate the intensity of orofacial pain. Six weeks later, the same observers rated the video clips a second time. Results. Floor and ceiling effects were found for some items, which were therefore excluded from the statistical analyses. The categories comprising the remaining items (n = 6) showed reliability varying between fair-to-good and excellent (interobserver reliability, ICC: 0.40-0.47; intraobserver reliability, ICC: 0.40-0.92). Conclusions. The "chewing" subscale of the OPS-NVI showed fair-to-good to excellent interobserver and intraobserver reliability in this dementia population. This study contributes to the validation process of the OPS-NVI as a whole and stresses the need for further assessment of the reliability of the OPS-NVI with subjects who might already show signs of orofacial pain.
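The reported reliabilities are intraclass correlation coefficients. For two raters scoring the same clips, ICC(2,1) (two-way random effects, absolute agreement, single rater) can be computed from the two-way ANOVA mean squares; the paper does not state which ICC form it used, so this particular form is an assumption for illustration.

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of rows (subjects/clips), each a list of k rater scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_m = [sum(r) / k for r in ratings]                    # per-subject means
    col_m = [sum(r[j] for r in ratings) / n for j in range(k)]  # per-rater means
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_m)
    ss_cols = n * sum((m - grand) ** 2 for m in col_m)
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement between raters yields an ICC of 1, and rater disagreement pulls the coefficient down toward the 0.40-0.92 range reported above.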
Emotional Processing of Infant Displays in Eating Disorders
Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet
2014-01-01
Aim: The aim of this study is to examine emotional processing of infant displays in people with Eating Disorders (EDs). Background: Social and emotional factors are implicated as causal and maintaining factors in EDs. Difficulties in emotional regulation have been mainly studied in relation to adult interactions, with less interest given to interactions with infants. Method: A sample of 138 women were recruited, of whom 49 suffered from Anorexia Nervosa (AN), 16 from Bulimia Nervosa (BN), and 73 were healthy controls (HCs). Attentional responses to happy and sad infant faces were tested with the visual probe detection task. Emotional identification of, and reactivity to, infant displays were measured using self-report measures. Facial expressions to video clips depicting sad, happy and frustrated infants were also recorded. Results: No significant differences between groups were observed in the attentional response to infant photographs. However, there was a trend for patients to disengage from happy faces. People with EDs also reported lower positive ratings of happy infant displays and greater subjective negative reactions to sad infants. Finally, patients showed a significantly lower production of facial expressions, especially in response to the happy infant video clip. Insecure attachment was negatively correlated with positive facial expressions displayed in response to the happy infant and positively correlated with the intensity of negative emotions experienced in response to the sad infant video clip. Conclusion: People with EDs do not have marked abnormalities in their attentional processing of infant emotional faces. However, they do have a reduction in facial affect, particularly in response to happy infants. They also report greater negative reactions to sadness, and rate positive emotions less intensively than HCs. This pattern of emotional responsivity suggests abnormalities in social reward sensitivity and might indicate new treatment targets.
PMID:25463051
Arfeller, Carola; Schwarzbach, Jens; Ubaldi, Silvia; Ferrari, Paolo; Barchiesi, Guido; Cattaneo, Luigi
2013-04-01
The posterior superior temporal sulcus (pSTS) is active when observing biological motion. We investigated the functional connections of the pSTS node within the action observation network by measuring the after-effect of focal repetitive transcranial magnetic stimulation (rTMS) with whole-brain functional magnetic resonance imaging (fMRI). Participants received 1-Hz rTMS over the pSTS region for 10 min and underwent fMRI immediately after. While scanned, they were shown short video clips of a hand grasping an object (grasp clips) or moving next to it (control clips). rTMS-fMRI was repeated for four consecutive blocks. In two blocks we stimulated the left pSTS region and in the other two the right pSTS region. For each side, TMS was applied either at an effective intensity (95% of motor threshold) or at an ineffective intensity (50% of motor threshold). Brain regions showing interactive effects of (clip type) × (TMS intensity) were identified in the lateral temporo-occipital cortex, in the anterior intraparietal region and in the ventral premotor cortex. Remote effects of rTMS were mostly limited to the stimulated hemisphere and consisted of an increase in blood oxygen level-dependent responses to grasp clips compared to control clips. We show that the pSTS occupies a pivotal relay position during observation of goal-directed actions.
Indexing and retrieval of MPEG compressed video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.
1998-04-01
To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
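The retrieval pipeline described above can be sketched in a few lines: per-frame compressed-domain statistics are mapped to a low-dimensional vector, spatial features serve as the primary index, and nearest-neighbor distance ranks the results. The feature choices (a macroblock-type histogram plus mean motion-vector magnitude) and the Euclidean metric are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of compressed-domain indexing and retrieval.
import math

def frame_feature(mb_type_hist, motion_mags):
    # Normalize the macroblock-type histogram (spatial part) and append the
    # mean motion-vector magnitude (temporal part).
    total = sum(mb_type_hist) or 1
    spatial = [c / total for c in mb_type_hist]
    temporal = sum(motion_mags) / len(motion_mags) if motion_mags else 0.0
    return spatial + [temporal]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(index, query_vec, k=2):
    # Rank stored clips by feature-space distance to the query.
    ranked = sorted(index.items(), key=lambda kv: euclidean(kv[1], query_vec))
    return [clip_id for clip_id, _ in ranked[:k]]

index = {
    "clip_a": frame_feature([8, 1, 1], [0.2, 0.1]),
    "clip_b": frame_feature([1, 8, 1], [2.5, 3.0]),
    "clip_c": frame_feature([7, 2, 1], [0.3, 0.2]),
}
query = frame_feature([8, 1, 1], [0.25, 0.15])
print(retrieve(index, query))  # → ['clip_a', 'clip_c']
```

In a real system the low-dimensional vectors would go into a database index rather than a Python dict, but the ranking logic is the same.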
Sex differences during humor appreciation in child-sibling pairs.
Vrticka, Pascal; Neely, Michelle; Walter Shelly, Elizabeth; Black, Jessica M; Reiss, Allan L
2013-01-01
The developmental origin of sex differences in adult brain function is poorly understood. Elucidating neural mechanisms underlying comparable cognitive functionality in both children and adults is required to address this gap. Humor appreciation represents a particularly relevant target for such developmental research because explanatory theories apply across the life span, and underlying neurocircuitry shows sex differences in adults. As a positive mood state, humor is also of interest due to sex differences in rates of depression, a disorder afflicting twice as many women as men. In this study, we employed functional magnetic resonance imaging (fMRI) to investigate brain responses to funny versus positive (and neutral) video clips in 22 children, ages 6-13 years, including eight sibling-pairs. Our data revealed increased activity to funny clips in bilateral temporo-occipital cortex, midbrain, and amygdala in girls. Conversely, we found heightened activation to positive clips in bilateral inferior parietal lobule, fusiform gyrus, inferior frontal gyrus, amygdala, and ventromedial prefrontal cortex in boys. Many of these effects persisted when looking at sibling-pairs only. We interpret such findings as reflecting the presence of early sex divergence in reward saliency or expectation and stimulus relevance attribution. These findings are discussed in the context of evolutionary and developmental theories of humor function.
CLIPS++: Embedding CLIPS into C++
NASA Technical Reports Server (NTRS)
Obermeyer, Lance; Miranker, Daniel P.
1994-01-01
This paper describes a set of C++ extensions to the CLIPS language and their embodiment in CLIPS++. These extensions and the implementation approach of CLIPS++ provide a new level of embeddability with C and C++. These extensions are a C++ include statement and a defcontainer construct: (include (c++-header-file.h)) and (defcontainer (c++-type)). The include construct allows C++ functions to be embedded in both the LHS and RHS of CLIPS rules. The header file in an include construct is the same header file the programmer uses for his/her own C++ code, independent of CLIPS. The defcontainer construct allows the inference engine to treat C++ class instances as CLIPS deftemplate facts. Consequently, existing C++ class libraries may be transparently imported into CLIPS. These C++ types may use advanced features like inheritance, virtual functions, and templates. The implementation has been tested with several class libraries, including Rogue Wave Software's Tools.h++, GNU's libg++, and USL's C++ Standard Components. The execution speed of CLIPS++ has been determined to be 5 to 700 times the execution speed of CLIPS 6.0 (10 to 20X typical).
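The defcontainer idea, exposing native class instances to a rule engine as if they were template facts, can be mimicked in any host language. The toy Python sketch below (hypothetical names throughout) matches rule conditions directly against object attributes; CLIPS++ itself does this for C++ classes inside the CLIPS inference engine, which this sketch does not reproduce.

```python
# Toy analogue of defcontainer: native objects act as facts for a tiny
# forward-matching rule engine.
class Sensor:
    def __init__(self, name, reading):
        self.name, self.reading = name, reading

def run_rules(facts, rules):
    # Each rule is a (condition, action) pair matched against every fact.
    fired = []
    for fact in facts:
        for condition, action in rules:
            if condition(fact):
                fired.append(action(fact))
    return fired

rules = [(lambda s: s.reading > 100, lambda s: f"alarm:{s.name}")]
facts = [Sensor("t1", 80), Sensor("t2", 130)]
print(run_rules(facts, rules))  # → ['alarm:t2']
```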
Kim, Hodam; Ha, Jihyeon; Park, Wanjoo; Kim, Laehyun
2018-01-01
The increase in the number of adolescents with internet gaming disorder (IGD), a type of behavioral addiction, is becoming an issue of public concern. Teaching adolescents to suppress their craving for gaming in daily life situations is one of the core strategies for treating IGD. Recent studies have demonstrated that computer-aided treatment methods, such as neurofeedback therapy, are effective in relieving the symptoms of a variety of addictions. When a computer-aided treatment strategy is applied to the treatment of IGD, detecting whether an individual is currently experiencing a craving for gaming is important. We aroused a craving for gaming in 57 adolescents with mild to severe IGD using numerous short video clips showing gameplay videos of three addictive games. At the same time, a variety of biosignals were recorded including photoplethysmogram, galvanic skin response, and electrooculogram measurements. After observing the changes in these biosignals during the craving state, we classified each individual participant’s craving/non-craving states using a support vector machine. When video clips edited to arouse a craving for gaming were played, significant decreases in the standard deviation of the heart rate, the number of eye blinks, and saccadic eye movements were observed, along with a significant increase in the mean respiratory rate. Based on these results, we were able to classify whether an individual participant felt a craving for gaming with an average accuracy of 87.04%. This is the first study that has attempted to detect a craving for gaming in an individual with IGD using multimodal biosignal measurements. Moreover, this is the first study to show that an electrooculogram can provide useful biosignal markers for detecting a craving for gaming. PMID:29301261
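The classification step in the study above can be sketched with a minimal linear SVM trained by hinge-loss sub-gradient descent on hand-built feature vectors. The synthetic data below only mimics the reported directions of change during craving (lower heart-rate variability, blinks, and saccades; higher respiratory rate); the actual study's features, kernel, and validation scheme are not reproduced here.

```python
# Minimal linear-SVM sketch for craving/non-craving classification on
# synthetic biosignal features (assumed values, not study data).
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    # Sub-gradient descent on the regularized hinge loss; y in {-1, +1}.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # margin violation: step toward the sample
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # only regularization shrinkage
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

random.seed(0)
# Features per window: [HR-SD, blinks/min, saccades/min, breaths/min].
craving = [[0.4 + random.gauss(0, .05), 8 + random.gauss(0, 1),
            60 + random.gauss(0, 5), 18 + random.gauss(0, 1)] for _ in range(20)]
rest    = [[0.8 + random.gauss(0, .05), 15 + random.gauss(0, 1),
            90 + random.gauss(0, 5), 14 + random.gauss(0, 1)] for _ in range(20)]
X = craving + rest
y = [1] * 20 + [-1] * 20
w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

A production pipeline would use a proper SVM library and held-out evaluation; this sketch only shows the shape of the feature-vector-to-decision mapping.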
Pioneer-Venus Press Clip. [Solar System formation and extraterrestrial life]
NASA Technical Reports Server (NTRS)
1988-01-01
This video shows, with high quality animation, the formation of the Solar System: comets, Jupiter, Europa, Saturn, Titan, Mars, the Sun, and early Earth. The focus is on life elsewhere in the Solar System. The recording was prepared for a news conference.
The Daily Show with Jon Stewart: Part 2
ERIC Educational Resources Information Center
Trier, James
2008-01-01
"The Daily Show With Jon Stewart" is one of the best critical literacy programs on television, and in this Media Literacy column the author suggests ways that teachers can use video clips from the show in their classrooms. (For Part 1, see EJ784683.)
2016-10-04
A prominence observed along the right edge of the sun rose up and then most of it bent back down to the surface (Oct. 4, 2016). Prominences are clouds of plasma, usually elongated, that are suspended above the sun by magnetic forces. They are notably unstable. A review of SOHO's coronagraph videos shows that some of the particles did break away into space. The video clip, which covers eight hours of activity, was taken in a wavelength of extreme UV light. Movies are available at http://photojournal.jpl.nasa.gov/catalog/PIA21106
NASA Technical Reports Server (NTRS)
1960-01-01
In this 17-second video clip, the X-15 is shown in flight and then landing on Rogers Dry Lakebed adjacent to Edwards Air Force Base. It is followed by an F-104A chase aircraft, whose pilot provided a second set of eyes to the X-15 pilot on landing in case of any problems. The video shows the skids on the back of the X-15 contacting the lakebed, with the aircraft's nose then rotating downward until the nose landing gear was on the lakebed.
Videos and images from 25 years of teaching compressible flow
NASA Astrophysics Data System (ADS)
Settles, Gary
2008-11-01
Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.
News video story segmentation method using fusion of audio-visual features
NASA Astrophysics Data System (ADS)
Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang
2007-11-01
News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual-feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. It then uses the audio candidates as cues and develops a fusion method that effectively uses the diverse types of visual candidates to refine the audio candidates into story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
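The fusion step described above can be sketched as confirming each audio candidate only when a visual candidate falls within a small temporal window of it. The times, window size, and candidate lists below are invented for illustration; the paper's actual fusion rules are more elaborate.

```python
# Illustrative audio-visual fusion for story-boundary detection.
def fuse_boundaries(silence_times, visual_times, window=1.0):
    # Audio candidates (silence-clip midpoints) are the baseline; a candidate
    # becomes a story boundary only if a visual candidate (shot boundary or
    # anchor shot) lies within `window` seconds of it.
    boundaries = []
    for t in silence_times:
        if any(abs(t - v) <= window for v in visual_times):
            boundaries.append(t)
    return boundaries

silences = [12.0, 45.5, 80.2, 130.0]       # silence-clip midpoints (s)
visuals  = [11.6, 46.0, 95.3, 129.8]       # shot boundaries / anchor shots (s)
print(fuse_boundaries(silences, visuals))  # → [12.0, 45.5, 130.0]
```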
Identifying swimmers as water-polo or swim team-mates from visual displays of less than one second.
Steel, Kylie A; Adams, Roger D; Canning, Colleen G
2007-09-01
Opportunities for ball passing in water-polo may be brief and the decision to pass only informed by minimal visual input. Since researchers using point light displays have shown that the walking or running gait of familiars can be identified, water-polo players may have the ability to recognize team-mates from their swimming gait. To test this hypothesis, members of a water-polo team and a competition swim team viewed two randomized sets of video clips, each less than one second long, of swimmers from both teams sprinting freestyle past a fixed camera. The arm stroke clip sequence showed only the upper body, and the kick sequence showed only the lower body. After viewing each video clip, observers rated their level of certainty as to whether the swimmer presented was a team-mate or not. Discrimination was significantly above chance in both groups. Water-polo players were better able to identify team-mates from their kick, whereas swimmers were better able to do so by viewing arm stroke. Our results suggest that, as with walking and running gait, small amounts of visual information about swimmers can be used for recognition, and so raise the possibility that specific training may be able to improve team-mate classification in water-polo, particularly in newly formed teams.
Boardman, Katie A; Bartels, Ross M
2018-05-19
In this experimental study, 89 participants were allocated to an offending pedophile, nonoffending pedophile, or control video condition. They then watched two short help-seeking video clips of an older male and a younger male (counterbalanced). Judgments about each male were assessed, as were general attitudes toward pedophiles and sexual offenders. Offending pedophiles were judged as more deserving of punishment than the nonoffending pedophiles and controls. Age of the male was found to have an effect on judgments of dangerousness. Existing attitudes toward pedophiles and sexual offenders did not statistically differ. Limitations and future research ideas are discussed.
Training in Cerebral Aneurysm Clipping Using Self-Made 3-Dimensional Models.
Mashiko, Toshihiro; Kaneko, Naoki; Konno, Takehiko; Otani, Keisuke; Nagayama, Rie; Watanabe, Eiju
Recently, there have been increasingly fewer opportunities for junior surgeons to receive on-the-job training. Therefore, we created custom-built three-dimensional (3D) surgical simulators for training in connection with cerebral aneurysm clipping. Three patient-specific models were composed of a trimmed skull, retractable brain, and a hollow elastic aneurysm with its parent artery. The brain models were created using 3D printers via a casting technique. The artery models were made by 3D printing and a lost-wax technique. Four residents and 2 junior neurosurgeons attended the training courses. The trainees retracted the brain, observed the parent arteries and aneurysmal neck, selected the clip(s), and clipped the neck of an aneurysm. The duration of simulation was recorded. A senior neurosurgeon then assessed the trainee's technical skill and explained how to improve his/her performance for the procedure using a video of the actual surgery. Subsequently, the trainee attempted the clipping simulation again, using the same model. After the course, the senior neurosurgeon assessed each trainee's technical skill. The trainee critiqued the usefulness of the model and the effectiveness of the training course. Trainees succeeded in performing the simulation in line with an actual surgery. Their skills tended to improve upon completion of the training. These simulation models are easy to create, and we believe that they are very useful for training junior neurosurgeons in the surgical techniques needed for cerebral aneurysm clipping. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pearson, Richard L.
2016-10-01
We have developed Astronomy4Kids to help cultivate the next generation of scientists by using technology to reach every interested child in both formal and informal learning environments. This online video series fills the void of effective STEM education tools for children under the age of 8. Our first collection of videos discusses many planetary topics, including planet and moon formation theories, solar and lunar eclipses, and the seasonal effect of the Earth's tilt. As education and outreach become a larger focus of groups such as AAS and NASA, it is imperative to include programs such as Astronomy4Kids to extend these initiatives to younger age groups. Traditionally, this age group has been viewed as too young to be introduced to physics and astronomy concepts. However, child development research is consistently demonstrating the remarkable plasticity of a young child's mind: the younger one is introduced to a complex concept, the easier it is to grasp later on. Following the philosophies of Fred Rogers, we present children with a real, relatable instructor, allowing them to focus on the concepts being presented. The format of Astronomy4Kids consists of short instructional video clips, each usually including a hands-on activity that is easily reproduced at home or in the classroom. This permits flexibility in how the video series is utilized. Within formal classroom or after-school situations, teachers and instructors can lead the discussion and activity with help from the video and supplemental materials (e.g. worksheets, concept outlines, etc.). Informal environments permit the viewer to complete the tasks on their own or simply enjoy the presentation. The video series can be found on YouTube (under "Astronomy 4 Kids") or Facebook (at www.facebook.com/astronomy4kids); we have also expanded to Instagram (www.instragram.com/astronomy4kids) and Pinterest (www.pinterest.com/astronomy4kids).
Feigning Amnesia Moderately Impairs Memory for a Mock Crime Video.
Mangiulli, Ivan; van Oorsouw, Kim; Curci, Antonietta; Merckelbach, Harald; Jelicic, Marko
2018-01-01
Previous studies showed that feigning amnesia for a crime impairs actual memory for the target event. Lack of rehearsal has been proposed as an explanation for this memory-undermining effect of feigning. The aim of the present study was to replicate and extend previous research adopting a mock crime video instead of a narrative story. We showed participants a video of a violent crime. Next, they were requested to imagine that they had committed this offense and to either feign amnesia or confess the crime. A third condition was included: Participants in the delayed test-only control condition did not receive any instruction. On subsequent recall tests, participants in all three conditions were instructed to report as much information as possible about the offense. On the free recall test, feigning amnesia impaired memory for the video clip, but participants who were asked to feign crime-related amnesia outperformed controls. However, no differences between simulators and confessors were found in either correct cued recollection or distortion and commission rates. We also explored whether inner speech might modulate memory for the crime. Inner speech traits were not found to be related to the simulating amnesia effect. Theoretical and practical implications of our results are discussed.
Sorsdahl, Anne Brit; Moe-Nilssen, Rolf; Strand, Liv Inger
2008-02-01
The aim of this study was to examine observer reliability of the Gross Motor Performance Measure (GMPM) and the Quality of Upper Extremity Skills Test (QUEST) based on video clips. The tests were administered to 26 children with cerebral palsy (CP; 14 males, 12 females; range 2-13y, mean 7y 6mo), 24 with spastic CP and two with dyskinesia. Respectively, five, six, five, four, and six children were classified in Gross Motor Function Classification System Levels I to V; and four, nine, five, five, and three children were classified in Manual Ability Classification System levels I to V. The children's performances were recorded and edited. Two experienced paediatric physical therapists assessed the children from watching the video clips. Intraobserver and interobserver reliability values of the total scores were mostly high, with intraclass correlation coefficients (ICC)(1,1) varying from 0.69 to 0.97 and only one coefficient below 0.89. The ICCs of subscores varied from 0.36 to 0.95, with 'Alignment' and 'Weight shift' in GMPM and 'Protective extension' in QUEST found highly reliable. The subscores 'Dissociated movements' in GMPM and QUEST, and 'Grasp' in QUEST were the least reliable, and recommendations are made to increase the reliability of these subscores. Video scoring was time consuming, but was found to offer many advantages: the possibility to review performance, the use of specially trained observers for scoring, and a less demanding assessment for the children.
Eye movements while viewing narrated, captioned, and silent videos
Ross, Nicholas M.; Kowler, Eileen
2013-01-01
Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357
Using video analysis for concussion surveillance in Australian football.
Makdissi, Michael; Davis, Gavin
2016-12-01
The objectives of the study were to assess the relationship between various player and game factors and risk of concussion; and to assess the reliability of video analysis for mechanistic assessment of concussion in Australian football. Prospective cohort study. All impacts and collisions resulting in concussion were identified during the 2011 Australian Football League season. An extensive list of factors for assessment was created based upon previous analysis of concussion in Australian Football League and expert opinions. The authors independently reviewed the video clips and correlation for each factor was examined. A total of 82 concussions were reported in 194 games (rate: 8.7 concussions per 1000 match hours; 95% confidence interval: 6.9-10.5). Player demographics and game variables such as venue, timing of the game (day, night or twilight), quarter, travel status (home or interstate) or score margin did not demonstrate a significant relationship with risk of concussion; although a higher percentage of concussions occurred in the first 5 min of game time of the quarter (36.6%), when compared to the last 5 min (20.7%). Variables with good inter-rater agreement included position on the ground, circumstances of the injury and cause of the impact. The remainder of the variables assessed had fair-poor inter-rater agreement. Common problems included insufficient or poor quality video and interpretation issues related to the definitions used. Clear definitions and good quality video from multiple camera angles are required to improve the utility of video analysis for concussion surveillance in Australian football. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
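The incidence rate quoted above (events per 1000 match hours with a 95% confidence interval) follows a standard calculation, sketched below with a normal approximation. The exposure figure is an assumption chosen so the rate matches the abstract's 8.7 per 1000 match hours for 82 concussions; the resulting interval differs slightly from the reported 6.9-10.5 because the paper's interval method is unknown.

```python
# Incidence rate per `per` exposure units with a normal-approximation CI.
import math

def incidence_rate(events, exposure_hours, per=1000, z=1.96):
    rate = events / exposure_hours * per
    # Poisson variance of the count is `events`; scale to the rate units.
    half_width = z * math.sqrt(events) / exposure_hours * per
    return rate, rate - half_width, rate + half_width

rate, lo, hi = incidence_rate(82, 9425)  # 9425 match hours is assumed
print(f"{rate:.1f} per 1000 match hours (95% CI {lo:.1f}-{hi:.1f})")
# → 8.7 per 1000 match hours (95% CI 6.8-10.6)
```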
McKay, Sandra M; Maki, Brian E
2010-01-01
A computer-based 'Useful Field of View' (UFOV) training program has been shown to be effective in improving visual processing in older adults. Studies of young adults have shown that playing video games can have similar benefits; however, these studies involved realistic and violent 'first-person shooter' (FPS) games. The willingness of older adults to play such games has not been established. OBJECTIVES: To determine the degree to which older adults would accept playing a realistic, violent FPS-game, compared to video games not involving realistic depiction of violence. METHODS: Sixteen older adults (ages 64-77) viewed and rated video-clip demonstrations of the UFOV program and three video-game genres (realistic-FPS, cartoon-FPS, fixed-shooter), and were then given an opportunity to try them out (30 minutes per game) and rate various features. RESULTS: The results supported a hypothesis that the participants would be less willing to play the realistic-FPS game in comparison to the less violent alternatives (p's<0.02). After viewing the video-clip demonstrations, 10 of 16 participants indicated they would be unwilling to try out the realistic-FPS game. Of the six who were willing, three did not enjoy the experience and were not interested in playing again. In contrast, all 12 subjects who were willing to try the cartoon-FPS game reported that they enjoyed it and would be willing to play again. A high proportion also tried and enjoyed the UFOV training (15/16) and the fixed-shooter game (12/15). DISCUSSION: A realistic, violent FPS video game is unlikely to be an appropriate choice for older adults. Cartoon-FPS and fixed-shooter games are more viable options. Although most subjects also enjoyed UFOV training, a video-game approach has a number of potential advantages (for instance, 'addictive' properties, low cost, self-administration at home). We therefore conclude that non-violent cartoon-FPS and fixed-shooter video games warrant further investigation as an alternative to the UFOV program for training improved visual processing in seniors.
Sleath, Betsy; Carpenter, Delesha M; Lee, Charles; Loughlin, Ceila E; Etheridge, Dana; Rivera-Duchesne, Laura; Reuland, Daniel S; Batey, Karolyne; Duchesne, Cristina I; Garcia, Nacire; Tudor, Gail
2016-09-01
Our objective was to develop a series of short educational videos for teens and parents to watch before pediatric visits to motivate teens to be more actively involved during their visits. The development of the short educational videos was theoretically guided by Social Cognitive Theory. First, we conducted four focus groups with teens (ages 11 to 17) with asthma, four focus groups with the teens' parents, and seven focus groups with pediatric providers from four clinics. The research team, which included two teens with asthma and their parents, analyzed the focus group transcripts for themes and then developed the initial video script. Next, a visual storyboard was reviewed by four focus groups with parents and four with teens to identify areas of the script for improvement. The English videos were then produced. Focus groups with Hispanic parents and teens were then conducted for advice on how to modify the videos to make a more culturally appropriate Spanish version. Based on focus group results, teen newscasters narrate six one- to two-minute videos with different themes: (a) how to get mom off your back, (b) asthma triggers, (c) staying active with asthma, (d) tracking asthma symptoms, (e) how to talk to your doctor, and (f) having confidence with asthma. Each video clip has three key messages and emphasizes how teens should discuss these messages with their providers. Teens, parents, and providers gave us excellent insight into developing videos to increase teen involvement during medical visits.
From Icons to iPods: Visual Electronic Media Use and Worship Satisfaction
ERIC Educational Resources Information Center
Gilbert, Ronald
2010-01-01
A steady transition has been taking place in church services with the employment of visual electronic media intended to enhance the worship experience for congregants. Electronically assisted worship utilizes presentational software and hardware to incorporate video, film clips, texts, graphics, lyrics, TV broadcasts, Internet, Twitter, and even…
Adolescents' Perceptions of Male Involvement in Relational Aggression: Age and Gender Differences
ERIC Educational Resources Information Center
Johnson, Curt; Heath, Melissa Allen; Bailey, Benjamin M.; Coyne, Sarah M.; Yamawaki, Niwako; Eggett, Dennis L.
2013-01-01
This study compared age and gender differences in adolescents' perceptions of male involvement in relational aggression (RA). After viewing two of four video clips portraying RA, each participating adolescent (N = 314; Grades 8-12) answered questions related to rationalizing bullying behaviors--specifically minimizing bullying, blaming victims,…
Practical Epistemologies in Physical Education Practice
ERIC Educational Resources Information Center
Quennerstedt, Mikael
2013-01-01
With a point of departure in a transactional understanding of epistemology, the purpose of this paper is to explore practical epistemologies in physical education (PE) by investigating how knowledge is produced and reproduced in students' and teachers' actions in PE practices posted as clips on the user-generated video-sharing website…
Brief Report: Driving Hazard Perception in Autism
ERIC Educational Resources Information Center
Sheppard, Elizabeth; Ropar, Danielle; Underwood, Geoffrey; van Loon, Editha
2010-01-01
This study investigated whether individuals with ASD (autistic spectrum disorders) are able to identify driving hazards, given their difficulties processing social information (Klin et al., "Archives of General Psychiatry" 59: 809-816, 2002). Twenty-three adult males with ASD and 21 comparison participants viewed 10 video clips containing driving…
Music Software and Young Children: Fun and Focused Instruction
ERIC Educational Resources Information Center
Peters, G. David
2009-01-01
Readers have experienced the acceleration in music technology developments in recent years. The ease with which students and teacher can access digital audio files, video clips of music performances, and online instructional resources is impressive. Creativity "environments" were developed in a game-like format for children to experiment with…
ERIC Educational Resources Information Center
New York Univ., NY. Alternate Media Center.
The Community Video Workshop, a pilot project being undertaken by the Alternate Media Center of New York University's School of the Arts in cooperation with ATC and Berks TV Cable Company, was intended to make cable television facilities available to Berks County. This document consists of a collection of newspaper clippings, letters, memos, and…
Perceived Credibility and Eyewitness Testimony of Children with Intellectual Disabilities
ERIC Educational Resources Information Center
Henry, L.; Ridley, A.; Perry, J.; Crane, L.
2011-01-01
Background: Although children with intellectual disabilities (ID) often provide accurate witness testimony, jurors tend to perceive their witness statements to be inherently unreliable. Method: The current study explored the free recall transcripts of child witnesses with ID who had watched a video clip, relative to those of typically developing…
The Causal Effects of Emotion on Couples' Cognition and Behavior
ERIC Educational Resources Information Center
Tashiro, Ty; Frazier, Patricia
2007-01-01
The authors conducted 2 translational studies that assessed the causal effects of emotion on maladaptive cognitions and behaviors in couples. Specifically, the authors examined whether negative emotions increased and positive emotions decreased partner attributions and demand-withdraw behaviors. Study 1 (N=164) used video clips to assess the…
DOT National Transportation Integrated Search
2006-07-01
This report describes the development of a new coding scheme to classify potentially distracting secondary tasks performed while driving, such as eating and using a cell phone. Compared with prior schemes (Stutts et al., first-generation UMTRI scheme...
Reading the Intentionality of Young Children
ERIC Educational Resources Information Center
Forman, George E.
2010-01-01
Through six video clips and accompanying commentary, the author argues that by carefully observing how very young children play, adults can gain insight into their high-level thinking and their knowledge, as well as the implications that their strategies hold for their assumptions, theories, and expectations. Adults can then become more protective…
NASA Technical Reports Server (NTRS)
Adams, Mitzi L.; Mortfield, P.; Hathaway, D. H.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
To promote awareness of the Sun-Earth connection, NASA's Marshall Space Flight Center, in collaboration with the Stanford SOLAR Center, sponsored a one-day Sun-Earth Day event on April 27, 2001. Although "celebrated" on only one day, teachers and students from across the nation prepared for more than a month in advance. Workshops were held in March to train teachers. Students performed experiments, the results of which were shared through video clips and an Internet web cast. Our poster includes highlights from student experiments (grades 2-12), lessons learned from the teacher workshops and the event itself, and plans for Sun-Earth Day 2002.
Dive and discover: Expeditions to the seafloor
NASA Astrophysics Data System (ADS)
Lawrence, Lisa Ayers
The Dive and Discover Web site is a virtual treasure chest of deep sea science and classroom resources. The goals of Dive and Discover are to engage students, teachers, and the general public in the excitement of ocean discovery through an interactive educational Web site. You can follow scientists on oceanographic research cruises by reading their daily cruise logs, viewing photos and video clips of the discoveries, and even e-mailing questions to the scientists and crew. WHOI has also included an "Educator's Companion" section with teaching strategies, activities, and assessments, making Dive and Discover an excellent resource for the classroom.
With the VLT Interferometer towards Sharper Vision
NASA Astrophysics Data System (ADS)
2000-05-01
The Nova-ESO VLTI Expertise Centre Opens in Leiden (The Netherlands) European science and technology will gain further strength when the new, front-line Nova-ESO VLTI Expertise Centre (NEVEC) opens in Leiden (The Netherlands) this week. It is a joint venture of the Netherlands Research School for Astronomy (NOVA) (itself a collaboration between the Universities of Amsterdam, Groningen, Leiden, and Utrecht) and the European Southern Observatory (ESO). It is concerned with the Very Large Telescope Interferometer (VLTI). The Inauguration of the new Centre will take place on Friday, May 26, 2000, at the Gorlaeus Laboratory (Lecture Hall no. 1), Einsteinweg 55 2333 CC Leiden; the programme is available on the web. Media representatives who would like to participate in this event and who want further details should contact the Nova Information Centre (e-mail: jacques@astro.uva.nl; Tel: +31-20-5257480 or +31-6-246 525 46). The inaugural ceremony is preceded by a scientific workshop on ground and space-based optical interferometry. NEVEC: A Technology Centre of Excellence As a joint project of NOVA and ESO, NEVEC will develop in the coming years the expertise to exploit the unique interferometric possibilities of the Very Large Telescope (VLT) - now being built on Paranal mountain in Chile. Its primary goals are the * development of instrument modeling, data reduction and calibration techniques for the VLTI; * accumulation of expertise relevant for second-generation VLTI instruments; and * education in the use of the VLTI and related matters. NEVEC will develop optical equipment, simulations and software to enable interferometry with VLT [1]. The new Center provides a strong impulse to Dutch participation in the VLTI. With direct involvement in this R&D work, the scientists at NOVA will be in the front row to do observations with this unique research facility, bound to produce top-level research and many exciting new discoveries. 
The ESO VLTI at Paranal ESO PR Photo 14a/00 ESO PR Photo 14a/00 [Preview - JPEG: 359 x 400 pix - 120k] [Normal - JPEG: 717 x 800 pix - 416k] [High-Res - JPEG: 2689 x 3000 pix - 6.7M] Caption : A view of the Paranal platform with the four 8.2-m VLT Unit Telescopes (UTs) and the foundations for the 1.8-m VLT Auxiliary Telescopes (ATs) that together will be used as the VLT Interferometer (VLTI). The three ATs will move on rails (yet to be installed) between the thirty observing stations above the holes that provide access to the underlying tunnel system. The light beams from the individual telescopes will be guided towards the centrally located, partly underground Interferometry Laboratory in which the VLTI instruments will be set up. This photo was obtained in December 1999 at which time some construction materials were still present on the platform; they were electronically removed in this reproduction. The ESO VLT facility at Paranal (Chile) consists of four Unit Telescopes with 8.2-m mirrors and several 1.8-m auxiliary telescopes that move on rails, cf. PR Photo 14a/00 . While each of the large telescopes can be used individually for astronomical observations, a prime feature of the VLT is the possibility to combine all of these telescopes into the Very Large Telescope Interferometer (VLTI) . In the interferometric mode, the light beams from the VLT telescopes are brought together at a common focal point in the Interferometry Laboratory that is placed at the centre of the observing platform on top of Paranal. In principle, this can be done in such a way that the resulting (reconstructed) image appears to come from a virtual telescope with a diameter that is equal to the largest distance between two of the individual telescopes, i.e., up to about 200 metres. The theoretically achievable image sharpness of an astronomical telescope is proportional to its diameter (or, for an interferometer, the largest distance between two of its component telescopes). 
The interferometric observing technique will thus allow the VLTI to produce images as sharp as 0.001 arcsec (at a wavelength of 1 µm) - this corresponds to viewing the shape of a golf ball at a distance of more than 8,000 km. The VLTI will do even better when this technique is later extended to shorter wavelengths in the visible part of the spectrum - it may ultimately distinguish human-size objects on the surface of the Moon (a 2-metre object at this distance, about 400,000 km, subtends an angle of about 0.001 arcsec). However, interferometry with the VLT demands that the wavefronts of light from the individual telescopes, which are up to 200 metres apart, be matched exactly, with less than 1 wavelength of difference. This demands continuous mechanical stability to a fraction of 1 µm (0.001 mm) for the heavy components over such large distances and is a technically formidable challenge. It is achieved by electronic feedback loops that measure and adjust the distances during the observations. In addition, continuous and automatic correction of image distortions from air turbulence in the telescopes' field of view is performed by means of adaptive optics [2]. VLTI technology at ESO, industry and institutes The VLT Interferometer is based on front-line technologies introduced and advanced by ESO, and its many parts are now being constructed at various sites in Europe. ESO PR Photo 14b/00 ESO PR Photo 14b/00 [Preview - JPEG: 359 x 400 pix - 72k] [Normal - JPEG: 717 x 800 pix - 200k] [High-Res - JPEG: 2687 x 3000 pix - 1.3M] Caption: Schematic layout of the VLT Interferometer. The light from a distant celestial object enters two of the VLT telescopes and is reflected by the various mirrors into the Interferometric Tunnel, below the observing platform on the top of Paranal. Two Delay Lines with moveable carriages continuously adjust the length of the paths so that the two beams interfere constructively and produce fringes at the interferometric focus in the laboratory.
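The quoted figures follow from the standard diffraction-limit relation (a textbook approximation, not taken from the press release itself): an interferometer's angular resolution is roughly the wavelength divided by the longest baseline.

```latex
% Resolution for baseline B = 200 m at wavelength lambda = 1 micron:
\theta \approx \frac{\lambda}{B}
       = \frac{1\times10^{-6}\,\mathrm{m}}{200\,\mathrm{m}}
       = 5\times10^{-9}\,\mathrm{rad}
       \approx 5\times10^{-9}\times 206\,265'' \approx 0.001''
% A golf ball (diameter ~4.3 cm) subtends this angle at a distance of
d \approx \frac{0.043\,\mathrm{m}}{5\times10^{-9}}
  \approx 8.6\times10^{6}\,\mathrm{m} \approx 8\,600\,\mathrm{km}
```

Both numbers are consistent with the release's claims of 0.001 arcsec and "more than 8,000 km".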
In 1998, Fokker Space (also in Leiden, The Netherlands) was awarded a contract for the delivery of the three Delay Lines of the VLTI. This mechanical-optical system will compensate the optical path differences of the light beams from the individual telescopes. It is necessary to ensure that the light from all telescopes arrives in the same phase at the focal point of the interferometer. Otherwise, the very sharp interferometric images cannot be obtained. More details are available in the corresponding ESO PR 04/98 and recent video sequences, included in ESO Video News Reel No. 9 and Video Clip 04a/00 , cf. below. Also in 1998, the company AMOS (Liège, Belgium) was awarded an ESO contract for the delivery of the three 1.8-m Auxiliary Telescopes (ATs) and of the full set of on-site equipment for the 30 AT observing stations, cf. ESO PR Photos 25a-b/98. This work is now in progress at the factory - various scenes are incorporated into ESO Video News Reel No. 9 and Video Clip 04b/00 . Several instruments for imaging and spectroscopy are currently being developed for the VLTI. The first will be the VLT Interferometer Commissioning Instrument (VINCI) that is the test and first-light instrument for the VLT Interferometer. It is being built by a consortium of French and German institutes under ESO contract. The VLTI Near-Infrared / Red Focal Instrument (AMBER) is a collaborative project between five institutes in France, Germany and Italy, under ESO contract. It will operate with two 8.2-m UTs in the wavelength range between 1 and 2.5 µm during a first phase (2001-2003). The wavelength coverage will be extended in a second phase down to 0.6 µm (600 nm) at the time the ATs become operational. Main scientific objectives are the investigation at very high-angular resolution of disks and jets around young stellar objects and dust tori at active galaxy nuclei with spectroscopic observations. 
The Phase-Referenced Imaging and Microarcsecond Astrometry (PRIMA) device is managed by ESO and will allow simultaneous interferometric observations of two objects - each with a maximum size of 2 arcsec - and provide exceedingly accurate positional measurements. This will be of importance for many different kinds of astronomical investigations, for instance the search for planetary companions by means of accurate astrometry. The MID-Infrared interferometric instrument (MIDI) is a project collaboration between eight institutes in France, Germany and the Netherlands [1], under ESO contract. The actual design of MIDI is optimized for operation at 10 µm and a possible extension to 20 µm is being considered. Notes [1] The NEVEC Centre is involved in the MIDI project for the VLTI. Another joint project between ESO and NOVA is the Wide-Field Imager OMEGACAM for the VLT Survey Telescope (VST) that will be placed at Paranal. [2] Adaptive Optics systems make it possible to continuously "re-focus" an astronomical telescope in order to compensate for atmospheric turbulence and thus obtain the sharpest possible images. The work at ESO is described on the Adaptive Optics Team Homepage. VLTI-related videos now available In conjunction with the Inauguration of the NEVEC Centre (Leiden, The Netherlands) on May 26, 2000, ESO has issued ESO Video News Reel No. 9 (May 2000) ("The Sharpest Vision - Interferometry with the VLT"). Tapes with this VNR, suitable for transmission and in full professional quality (Betacam, etc.), are now available for broadcasters upon request; please contact the ESO EPR Department for more details. Extracts from this VNR are available as ESO Video Clips 04a/00 and 04b/00.
ESO PR Video Clip 04a/00 [160x120 pix MPEG-version] ESO PR Video Clip 04a/00 (2600 frames/1:44 min) [MPEG Video+Audio; 160x120 pix; 2.4 Mb] [MPEG Video+Audio; 320x240 pix; 4.8 Mb] [RealMedia; streaming; 33 kbps] [RealMedia; streaming; 200 kbps] ESO Video Clip 04a/00 shows some recent tests with the prototype VLT Delay Line carriage at Fokker Space (Leiden, The Netherlands). This device is crucial for the proper functioning of the VLTI and will be mounted in the main interferometric tunnel at Paranal. Contents: Outside view of the Fokker site. The carriage on rails. The protecting cover is removed. View towards the cat's eye. The carriage moves on the rails. ESO PR Video Clip 04b/00 [160x120 pix MPEG-version] ESO PR Video Clip 04b/00 (3425 frames/2:17 min) [MPEG Video+Audio; 160x120 pix; 3.2 Mb] [MPEG Video+Audio; 320x240 pix; 6.3 Mb] [RealMedia; streaming; 33 kbps] [RealMedia; streaming; 200 kbps] ESO Video Clip 04b/00 shows the construction of the 1.8-m VLT Auxiliary Telescopes at AMOS (Liège, Belgium). Contents: External view of the facility. Computer drawing of the mechanics. The 1.8-m mirror (graphics). Construction of the centerpiece of the telescope tube. Mechanical parts. Checking the optical shape of a 1.8-m mirror. Mirror cell with supports for the 1.8-m mirror. Test ramp with rails on which the telescope moves and an "observing station" (the hole). The telescope yoke that will support the telescope tube. Both clips are available in four versions: two MPEG files and two streaming versions of different sizes; the latter require RealPlayer software. They may be freely reproduced if ESO is mentioned as source. Most of the ESO PR Video Clips at the ESO website provide "animated" illustrations of the ongoing work and events at the European Southern Observatory. The most recent clip was: ESO PR Video Clip 03/00 with a trailer for "Physics on Stage" (2 May 2000). Information is also available on the web about other ESO videos.
A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions
Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines
2015-01-01
One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers’ comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media. PMID:26010899
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio stream are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. The audio scene is then categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
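A minimal sketch of the visual-shot segmentation idea the abstract describes: mark a shot boundary wherever the intensity histograms of consecutive frames differ abruptly. The histogram size and threshold below are illustrative assumptions, not values from the paper.

```python
# Detect abrupt visual changes (candidate shot cuts) by comparing coarse
# intensity histograms of consecutive frames. Frames are flat lists of
# grayscale pixel values in 0..255.

def histogram(frame, bins=8, levels=256):
    """Normalized coarse intensity histogram of one frame."""
    hist = [0] * bins
    for p in frame:
        hist[p * bins // levels] += 1
    n = len(frame)
    return [h / n for h in hist]  # normalize so any frame size compares

def shot_boundaries(frames, threshold=0.5):
    """Indices where the L1 histogram distance between consecutive
    frames exceeds the threshold, i.e. candidate abrupt shot changes."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(prev, cur))  # L1 distance
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two synthetic "shots": dark frames followed by bright frames.
dark = [10] * 64
bright = [240] * 64
print(shot_boundaries([dark, dark, dark, bright, bright]))  # [3]
```

A real system would apply the same change-detection pattern to audio features (e.g. energy or spectral measures) to obtain the audio scenes.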
Minardi, H A; Ritter, S
1999-06-01
Video recording techniques have been used in educational settings for a number of years. They have included viewing video taped lessons, using whole videos or clips of tapes as a trigger for discussion, viewing video recordings to observe role models for practice, and being video recorded in order to receive feedback on performance from peers and tutors. Although this last application has been in use since the 1960s, it has only been evaluated as a teaching method with health care professionals in the past 10 years and mostly in the areas of medical and counsellor education. In nurse education, however, use of video recording techniques has been advocated without any empirical evidence on its efficacy. This study has used nursing degree students and nurse educationalists to categorize statements from four cohorts of students who took part in a 12-day clinical supervision course during which their interpersonal skills were recorded on videotape. There were two categories: positive and negative/neutral. Analysis of the data showed that between 61% and 72% of the subjects gave an overall positive categorization to the statements in the questionnaire. Chi-square tests were significant for all groups in both categories. This suggests that both nursing students and nurse lecturers thought that course participants' statements expressed a positive belief that video tape recording is useful in enhancing students' ability to learn effective interpersonal skills in clinical supervision.
STS-114: Crew Training Clip from JSC
NASA Technical Reports Server (NTRS)
2003-01-01
STS-114 Discovery crew is shown in various training exercises at Johnson Space Center. The crew consists of Eileen Collins, Commander; James Kelly, Pilot; Charles Camarda, Mission Specialist; Wendy Lawrence, Mission Specialist; Soichi Noguchi, Mission Specialist; Steve Robinson, Mission Specialist; and Andy Thomas, Mission Specialist. The exercises include: 1) EVA training in the VR lab; 2) Neutral Buoyancy Laboratory (NBL) EVA training; 3) a walk to the Motion Base Simulator; 4) EVA preparations in the ISS Airlock; and 5) emergency egress from the Crew Compartment Trainer (CCT). A crew photo session is also presented. Footage of the Space Shuttle Atlantis inside the Kennedy Space Center Vehicle Assembly Building (VAB) after its demating from the Solid Rocket Booster and External Tank is shown. The video ends with techniques for inspecting and repairing Thermal Protection System tiles, video of external tank production at the Michoud Assembly Facility (MAF), and the redesign of the foam from the bipod ramp at Michoud.
Rodriguez, Edward K; Kwon, John Y; Herder, Lindsay M; Appleton, Paul T
2013-11-01
Our aim was to assess whether the Lauge-Hansen (LH) and the Muller AO classification systems for ankle fractures radiographically correlate with in vivo injuries based on observed mechanism of injury. Videos of potential study candidates were reviewed on YouTube.com. Individuals were recruited for participation if the video could be classified by injury mechanism with a high likelihood of sustaining an ankle fracture. Corresponding injury radiographs were obtained. Injury mechanism was classified using the LH system as supination/external rotation (SER), supination/adduction (SAD), pronation/external rotation (PER), or pronation/abduction (PAB). Corresponding radiographs were classified by the LH system and the AO system. Thirty injury videos with their corresponding radiographs were collected. Of the video clips reviewed, 16 had SAD mechanisms and 14 had PER mechanisms. There were 26 ankle fractures, 3 nonfractures, and 1 subtalar dislocation. Twelve fractures with SAD mechanisms had corresponding SAD fracture patterns. Five PER mechanisms had PER fracture patterns. Eight PER mechanisms had SER fracture patterns and 1 had SAD fracture pattern. When the AO classification was used, all 12 SAD type injuries had a 44A type fracture, whereas the 14 PER injuries resulted in nine 44B fractures, two 44C fractures, and three 43A fractures. When injury video clips of ankle fractures were matched to their corresponding radiographs, the LH system was 65% (17/26) consistent in predicting fracture patterns from the deforming injury mechanism. When the AO classification system was used, consistency was 81% (21/26). The AO classification, despite its development as a purely radiographic system, correlated with in vivo injuries, as based on observed mechanism of injury, more closely than did the LH system. Level IV, case series.
Sex differences in visual attention to sexually explicit videos: a preliminary study.
Tsujimura, Akira; Miyagawa, Yasushi; Takada, Shingo; Matsuoka, Yasuhiro; Takao, Tetsuya; Hirai, Toshiaki; Matsushita, Masateru; Nonomura, Norio; Okuyama, Akihiko
2009-04-01
Although men appear to be more interested in sexual stimuli than women, this difference is not completely understood. Eye-tracking technology has been used to investigate visual attention to still sexual images; however, it has not been applied to moving sexual images. To investigate whether sex difference exists in visual attention to sexual videos. Eleven male and 11 female healthy volunteers were studied by our new methodology. The subjects viewed two sexual videos (one depicting sexual intercourse and one not) in which several regions were designated for eye-gaze analysis in each frame. Visual attention was measured across each designated region according to gaze duration. Sex differences, the region attracting the most attention, and visually favored sex were evaluated. In the nonintercourse clip, gaze time for the face and body of the actress was significantly shorter among women than among men. Gaze time for the face and body of the actor and nonhuman regions was significantly longer for women than men. The region attracting the most attention was the face of the actress for both men and women. Men viewed the opposite sex for a significantly longer period than did women, and women viewed their own sex for a significantly longer period than did men. However, gaze times for the clip showing intercourse were not significantly different between sexes. A sex difference existed in visual attention to a sexual video without heterosexual intercourse; men viewed the opposite sex for longer periods than did women, and women viewed the same sex for longer periods than did men. There was no statistically significant sex difference in viewing patterns in a sexual video showing heterosexual intercourse, and we speculate that men and women may have similar visual attention patterns if the sexual stimuli are sufficiently explicit.
Quantitative analysis on electrooculography (EOG) for neurodegenerative disease
NASA Astrophysics Data System (ADS)
Liu, Chang-Chia; Chaovalitwongse, W. Art; Pardalos, Panos M.; Seref, Onur; Xanthopoulos, Petros; Sackellares, J. C.; Skidmore, Frank M.
2007-11-01
Many studies have documented abnormal horizontal and vertical eye movements in human neurodegenerative disease, as well as during altered states of consciousness (including drowsiness and intoxication) in healthy adults. Eye movement measurement may therefore play an important role in tracking the progress of neurodegenerative diseases and the state of alertness in healthy individuals. There are several techniques for measuring eye movement, including the infrared detection technique (IR), video-oculography (VOG), the scleral eye coil, and EOG. Among these recording techniques, EOG is a major modality for monitoring abnormal eye movement. In this real-time quantitative analysis study, methods that capture the characteristics of eye movement were proposed to accurately categorize the state of neurodegenerative subjects. The EOG recordings were taken while 5 tested subjects were watching a short (>120 s) animation clip. In response to the animated clip, the participants executed a number of eye movements, including vertical smooth pursuit (SVP), horizontal smooth pursuit (HVP), and random saccades (RS). Detection of abnormalities in ocular movement may improve our diagnosis and understanding of neurodegenerative disease and altered states of consciousness. A standard real-time quantitative analysis will improve detection and provide a better understanding of the pathology of these disorders.
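The abstract does not give its algorithm, but a common first step in quantifying EOG traces is velocity-threshold saccade detection: flag the segments where the eye-position signal moves faster than a cutoff. The sampling rate and threshold below are illustrative assumptions.

```python
# Velocity-threshold saccade detection on a 1-D eye-position signal.
# `signal` is eye position in degrees sampled at `fs` Hz; segments whose
# absolute velocity exceeds `vel_threshold` deg/s are flagged as saccades.

def detect_saccades(signal, fs=100.0, vel_threshold=200.0):
    """Return (start, end) sample-index pairs of candidate saccades."""
    velocity = [(signal[i + 1] - signal[i]) * fs
                for i in range(len(signal) - 1)]
    events, start = [], None
    for i, v in enumerate(velocity):
        if abs(v) > vel_threshold:
            if start is None:
                start = i  # saccade onset
        elif start is not None:
            events.append((start, i))  # saccade offset
            start = None
    if start is not None:
        events.append((start, len(velocity)))
    return events

# Synthetic trace: fixation at 0 deg, a fast 10-deg shift, fixation at 10 deg.
trace = [0.0] * 10 + [2.5, 5.0, 7.5, 10.0] + [10.0] * 10
print(detect_saccades(trace))  # [(9, 13)]
```

Smooth-pursuit segments would fall below the threshold and can be characterized separately, e.g. by gain relative to the animated target.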
Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T
2015-10-01
Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on one task only-the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
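The "classification power" reported above comes from receiver operating characteristic (ROC) analysis. As an illustration only (with made-up scores, not the study's data), the area under the ROC curve can be computed directly from its rank-sum (Mann-Whitney) interpretation:

```python
# AUC = probability that a randomly chosen positive case scores higher
# than a randomly chosen negative case (ties count 0.5).

def roc_auc(scores, labels):
    """Scores are any real-valued measure; labels are 1 (e.g. ASD) or 0."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical "proportion of gaze to objects" scores for two small groups.
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.3, 0.2]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = ASD, 0 = typical development
print(roc_auc(scores, labels))  # 1.0: these toy groups separate perfectly
```

An AUC near 0.5 indicates no classification power, which is the pattern the Static and Dynamic tasks showed, in contrast to the Interactive task.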
Video- Demonstration of Tea and Sugar in Water Onboard the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
2003-01-01
Saturday Morning Science, the science of opportunity series of applied experiments and demonstrations, performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. Imagine what would happen if a collection of loosely attractive particles were confined in a relatively small region in the floating environment of space. Would they self organize into a compact structure, loosely organize into a fractal, or just continue to float around in their container? In this video clip, Dr. Pettit explored the possibilities. At one point he remarks, 'These things look like pictures from the Hubble Space Telescope.' Watch the video and see what happens!
Video- Demonstration of Laminar Flow in a Liquid Onboard the International Space Station (ISS)
NASA Technical Reports Server (NTRS)
2003-01-01
Saturday Morning Science, the science of opportunity series of applied experiments and demonstrations, performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. In this video clip, Pettit demonstrates laminar flow in a rotating film of water. The demonstration is done by placing tracer particles in a water film held in place by a round wire loop, then stirring the system rotationally. The resulting flow clearly demonstrates laminar 2D behavior with spiraling streamlines.
Moisture-Induced Delamination Video of an Oxidized Thermal Barrier Coating
NASA Technical Reports Server (NTRS)
Smialek, James L.; Zhu, Dongming; Cuy, Michael D.
2008-01-01
PVD TBC coatings were thermally cycled to near-failure at 1150 °C. Normal failure occurred after 200 to 300 1-hr cycles with only moderate weight gains (0.5 mg/sq cm). Delamination and buckling were often delayed until well after cooldown (desktop spallation) but could be instantly induced by the application of water drops, as shown in a video clip that can be viewed by clicking on figure 2 of this report. Moisture therefore plays a primary role in delayed desktop TBC failure. Hydrogen embrittlement is proposed as the underlying mechanism.
Tracking flow of leukocytes in blood for drug analysis
NASA Astrophysics Data System (ADS)
Basharat, Arslan; Turner, Wesley; Stephens, Gillian; Badillo, Benjamin; Lumpkin, Rick; Andre, Patrick; Perera, Amitha
2011-03-01
Modern microscopy techniques allow imaging of circulating blood components under vascular flow conditions. The resulting video sequences provide unique insights into the behavior of blood cells within the vasculature and can be used to monitor and quantitate the recruitment of inflammatory cells at sites of vascular injury/inflammation, potentially serving as a pharmacodynamic biomarker that helps screen new therapies and individualize doses and combinations of drugs. However, manual analysis of these video sequences is intractable, requiring hours per 400-second video clip. In this paper, we present an automated technique to analyze, in real time, the behavior and recruitment of human leukocytes in whole blood under physiological conditions of shear through a simple multi-channel fluorescence microscope. This technique detects and tracks the recruitment of leukocytes to a bioactive surface coated on a flow chamber. Rolling cells (cells which partially bind to the bioactive matrix) are detected and counted, and their velocity is measured and graphed. The challenges here include high cell density, appearance similarity, and a low (1 Hz) frame rate. Our approach performs frame-differencing-based motion segmentation, track initialization, and online tracking of individual leukocytes.
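As a rough illustration of the frame-differencing segmentation step the abstract mentions, the sketch below thresholds the absolute difference of two consecutive frames to produce a motion mask. The frame sizes, cell intensity, and threshold are invented for the example and are not taken from the paper.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=20):
    """Frame differencing: mark pixels whose intensity changed by more
    than `thresh` between consecutive frames as moving."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

# Hypothetical pair of low-frame-rate frames: a bright "cell" moves
# one pixel to the right between captures.
prev = np.zeros((8, 8), dtype=np.uint8)
prev[3, 2] = 200
curr = np.zeros((8, 8), dtype=np.uint8)
curr[3, 3] = 200
mask = motion_mask(prev, curr)  # True at the old and new cell positions
```

At a 1 Hz frame rate a cell can move many pixels between frames, which is presumably why the authors pair segmentation with explicit track initialization and online tracking rather than relying on pixel overlap.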
Elements of Scenario-Based Learning on Suicidal Patient Care Using Real-Time Video.
Lu, Chuehfen; Lee, Hueying; Hsu, Shuhui; Shu, Inmei
2016-01-01
This study aims to understand students' learning experiences when receiving scenario-based learning combined with real-time video. Videos recording student nurses' interventions with a suicidal standardized patient (SP) were replayed immediately as teaching materials. Video clips and field notes from ten classes were analysed. Investigator and method triangulation were used to boost the robustness of the study. Three key elements were identified: emotional involvement, concretizing of the teaching material, and substitute learning. Emotions were evoked in the SP, the student performer, and the observing students, thus facilitating a learning effect. Concretizing of the teaching material refers to students being able to focus their discussions using visual and verbal information. Substitute learning occurred when the students watched the videos, as both the strengths and weaknesses represented were similar to those that would be likely to occur. These key elements explicate the students' learning experience and suggest a strategic teaching method.
Maekawa, Satoshi; Nomura, Ryosuke; Murase, Takayuki; Ann, Yasuyoshi; Harada, Masaru
2015-02-01
A 5-7 day hospital stay is usually needed after endoscopic submucosal dissection (ESD) of a gastric tumor because of the possibility of delayed perforation or bleeding. The aim of this study was to evaluate the efficacy of combined use of a single over-the-scope clip (OTSC) and through-the-scope clips (TTSCs) to achieve complete closure of the artificial gastric ulcer after ESD. We prospectively studied 12 patients with early gastric cancer or gastric adenoma. We performed complete closure of the post-ESD artificial gastric ulcer using a combination of a single OTSC and TTSCs. The mean size of the post-ESD artificial ulcer was 54.6 mm. The mean operating time for the closure procedure was 15.2 min, and the success rate was 91.7% (11/12). Patients who underwent complete closure of the post-ESD artificial gastric ulcer could be discharged the day after ESD and the closing procedure. Complete closure of the post-ESD artificial gastric ulcer using a combination of a single OTSC and TTSCs is useful for shortening the period of hospitalization and reducing treatment cost.
Autonomic nervous system activity of preschool-age children who stutter
Jones, Robin M.; Buhr, Anthony P.; Conture, Edward G.; Tumanova, Victoria; Walden, Tedra A.; Porges, Stephen W.
2014-01-01
Purpose The purpose of this study was to investigate potential differences in autonomic nervous system (ANS) activity to emotional stimuli between preschool-age children who do (CWS) and do not stutter (CWNS). Methods Participants were 20 preschool-age CWS (15 male) and 21 preschool-age CWNS (11 male). Participants were exposed to two emotion-inducing video clips (negative and positive), with neutral clips used to establish pre- and post-arousal baselines, followed by age-appropriate speaking tasks. Respiratory sinus arrhythmia (RSA) – often used as an index of parasympathetic activity – and skin conductance level (SCL) – often used as an index of sympathetic activity – were measured while participants listened to/watched the audio-video clip presentation and performed a speaking task. Results CWS, compared to CWNS, displayed lower amplitude RSA at baseline and higher SCL during a speaking task following the positive, compared to the negative, condition. During speaking, only CWS had a significant positive relation between RSA and SCL. Conclusion Present findings suggest that preschool-age CWS, when compared to their normally fluent peers, have a physiological state that is characterized by a greater vulnerability to emotional reactivity (i.e., lower RSA indexing less parasympathetic tone) and a greater mobilization of resources in support of emotional reactivity (i.e., higher SCL indexing more sympathetic activity) during positive conditions. Thus, while reducing stuttering to a pure physiological process is unwarranted, the present findings suggest that parasympathetic and sympathetic nervous system activity is involved. PMID:25087166
Taya, Shuichiro; Windridge, David; Osman, Magda
2012-01-01
Several studies have reported that task instructions influence eye-movement behavior during static image observation. In contrast, during dynamic scene observation we show that while the specificity of the goal of a task influences observers’ beliefs about where they look, the goal does not in turn influence eye-movement patterns. In our study observers watched short video clips of a single tennis match and were asked to make subjective judgments about the allocation of visual attention to the items presented in the clip (e.g., ball, players, court lines, and umpire). However, before attending to the clips, observers were either told to simply watch the clips (non-specific goal), or they were told to watch the clips with a view to judging which of the two tennis players was awarded the point (specific goal). The results of subjective reports suggest that observers believed that they allocated their attention more to goal-related items (e.g., court lines) if they performed the goal-specific task. However, we did not find an effect of goal specificity on major eye-movement parameters (i.e., saccadic amplitudes, inter-saccadic intervals, and gaze coherence). We conclude that the specificity of a task goal can alter observers’ beliefs about their attention allocation strategy, but such task-driven meta-attentional modulation does not necessarily correlate with eye-movement behavior. PMID:22768058
ERIC Educational Resources Information Center
Ortega, Marcela Itzel
2013-01-01
It's funny how a single lucid moment can change so many things; a single video can light a spark for justice. That reality hit the author when her teacher, Devin Carberry, played a YouTube clip of UNIDOS, a group of students fighting to keep ethnic studies alive in Arizona. Devin brought in the documentary "Precious Knowledge," an…
The Voyage of Exploration and Discovery: Earth-Moon, Mars and Beyond
NASA Technical Reports Server (NTRS)
Esper, Jaime
2005-01-01
This viewgraph is a printout of a presentation which originally contained multimedia components. The presentation summarizes the accomplishments of the Cassini-Huygens mission, with numerous images and video clips of Saturn, its rings, and its moons. The presentation also summarizes a feasibility analysis of the Neptune-Triton Explorer (NExTEP).
YouEDU: Addressing Confusion in MOOC Discussion Forums by Recommending Instructional Video Clips
ERIC Educational Resources Information Center
Agrawal, Akshay; Venkatraman, Jagadish; Leonard, Shane; Paepcke, Andreas
2015-01-01
In Massive Open Online Courses (MOOCs), struggling learners often seek help by posting questions in discussion forums. Unfortunately, given the large volume of discussion in MOOCs, instructors may overlook these learners' posts, detrimentally impacting the learning process and exacerbating attrition. In this paper, we present YouEDU, an…
Energy & Environmental Issues Interactive CD-ROM. Version 2.0. [CD-ROM].
ERIC Educational Resources Information Center
Florida State Univ., Tallahassee.
This CD-ROM presents various energy and environmental topics. "Great Energy Debate" uses video clips to explore the pros and cons of solar, coal, nuclear, and oil energy sources. "Energy Plant Tour" presents a virtual tour through a plant that converts solid waste into energy. "How Stuff Works" explains energy…
Using Presentation Software to Flip an Undergraduate Analytical Chemistry Course
ERIC Educational Resources Information Center
Fitzgerald, Neil; Li, Luisa
2015-01-01
An undergraduate analytical chemistry course has been adapted to a flipped course format. Course content was provided by video clips, text, graphics, audio, and simple animations organized as concept maps using the cloud-based presentation platform, Prezi. The advantages of using Prezi to present course content in a flipped course format are…
How Interviewers' Nonverbal Behaviors Can Affect Children's Perceptions and Suggestibility
ERIC Educational Resources Information Center
Almerigogna, Jehanne; Ost, James; Akehurst, Lucy; Fluck, Mike
2008-01-01
We conducted two studies to examine how interviewers' nonverbal behaviors affect children's perceptions and suggestibility. In the first study, 42 8- to 10-year-olds watched video clips showing an interviewer displaying combinations of supportive and nonsupportive nonverbal behaviors and were asked to rate the interviewer on six attributes (e.g.,…
Kinematic Measures of Imitation Fidelity in Primary School Children
ERIC Educational Resources Information Center
Williams, Justin H. G.; Casey, Jackie M.; Braadbaart, Lieke; Culmer, Peter R.; Mon-Williams, Mark
2014-01-01
We sought to develop a method for measuring imitation accuracy objectively in primary school children. Children imitated a model drawing shapes on the same computer-tablet interface they saw used in video clips, allowing kinematics of model and observers' actions to be directly compared. Imitation accuracy was reported as a correlation reflecting…
Realizing the Promise of Visualization in the Theory of Computing
ERIC Educational Resources Information Center
Cogliati, Joshua J.; Goosey, Frances W.; Grinder, Michael T.; Pascoe, Bradley A.; Ross, Rockford J.; Williams, Cheston J.
2005-01-01
Progress on a hypertextbook on the theory of computing is presented. The hypertextbook is a novel teaching and learning resource built around web technologies that incorporates text, sound, pictures, illustrations, slide shows, video clips, and--most importantly--active learning models of the key concepts of the theory of computing into an…
What Can Students Learn about Lab Safety from Mr. Bean?
ERIC Educational Resources Information Center
Carr, Jeremy M.; Carr, June M.
2016-01-01
Chemical laboratory safety education is often synonymous with boring, dry, drawn-out lectures. In an effort to challenge this norm and stimulate vivid learning opportunities about laboratory safety, college chemistry classes analyzed a short, humorous video clip of a character, named Mr. Bean, who visits a chemistry laboratory and commits several…
The Power of Digital Storytelling to Support Teaching and Learning
ERIC Educational Resources Information Center
Robin, Bernard R.
2016-01-01
Although the term "digital storytelling" may not be familiar to all readers, over the last twenty years, an increasing number of educators, students and others around the world have created short movies by combining computer-based images, text, recorded audio narration, video clips and music in order to present information on various…
Supporting Physiology Learning: The Development of Interactive Concept-Based Video Clips
ERIC Educational Resources Information Center
Guy, Richard; Byrne, Bruce; Rich, Peter
2014-01-01
The accommodation of diverse student learning approaches and maintenance of good academic outcomes are often difficult to achieve in university courses, particularly where large classes are concerned. These issues become even more significant when dealing with first-year students in science courses with high levels of factual and conceptual…
Bringing in the Bard: Shakespearean Plays as Context for Instrumental Analysis Projects
ERIC Educational Resources Information Center
Kloepper, Kathryn D.
2015-01-01
Scenes from the works of William Shakespeare were incorporated into individual and group projects for an upper-level chemistry class, instrumental analysis. Students read excerpts from different plays and then viewed a corresponding video clip from a stage or movie production. Guided-research assignments were developed based on these scenes. These…
Thick Slice and Thin Slice Teaching Evaluations
ERIC Educational Resources Information Center
Tom, Gail; Tong, Stephanie Tom; Hesse, Charles
2010-01-01
Student-based teaching evaluations are an integral component to institutions of higher education. Previous work on student-based teaching evaluations suggest that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end of the term "thick slice" student evaluations. This study's…
Baker, Nancy A; Cook, James R; Redfern, Mark S
2009-01-01
This paper describes the inter-rater and intra-rater reliability, and the concurrent validity, of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficient (ICC) = .90; intra-rater: ICC = .92]. Most individual items on the K-PeCS had good to excellent reliability, although six items fell below ICC = .75. The K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.
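For readers unfamiliar with the reliability statistic reported above, here is a minimal sketch of one common intraclass correlation form, ICC(2,1) (two-way random effects, absolute agreement, single rater). The toy ratings matrix is invented, and the published K-PeCS analysis may have used a different ICC variant.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) from an (n_subjects x k_raters) ratings array."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_s = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_r = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_e = (((ratings - grand) ** 2).sum()
            - (n - 1) * ms_s - (k - 1) * ms_r)                        # residual SS
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_s - ms_e) / (ms_s + (k - 1) * ms_e + k * (ms_r - ms_e) / n)

# Toy example: three subjects rated by two raters in perfect agreement.
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Perfect agreement yields an ICC of 1; rater disagreement pushes the value toward (or below) zero.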
Shinada, Mizuho; Yamagishi, Toshio; Tanida, Shigehito; Takahashi, Chisato; Inukai, Keigo; Koizumi, Michiko; Yokota, Kunihiro; Mifune, Nobuhiro; Takagishi, Haruto; Horita, Yutaka; Hashimoto, Hirofumi
2010-06-01
Cooperation in interdependent relationships is based on reciprocity in repeated interactions. However, cooperation in one-shot relationships cannot be explained by reciprocity. Frank, Gilovich, & Regan (1993) argued that cooperative behavior in one-shot interactions can be adaptive if cooperators display particular signals and people are able to distinguish cooperators from non-cooperators by decoding these signals. We argue that attractiveness and facial expressiveness are signals of cooperators. We conducted an experiment to examine whether these signals influence the detection accuracy of cooperative behavior. Our participants (blind to the targets' behavior in a Trust Game) viewed 30-second video clips, each comprising a cooperator and a non-cooperator in a Trust Game. The participants judged which one of the pair gave more money to the other participant. We found that participants were able to detect cooperators with higher accuracy than chance. Furthermore, participants rated male non-cooperators as more attractive than male cooperators, and rated cooperators as more expressive than non-cooperators. Further analyses showed that attractiveness inhibited detection accuracy while facial expressiveness fostered it.
Chugh, A Jessey; Pace, Jonathan R; Singer, Justin; Tatsuoka, Curtis; Hoffer, Alan; Selman, Warren R; Bambakidis, Nicholas C
2017-03-01
OBJECTIVE The field of neurosurgery is constantly undergoing improvements and advances, both in technique and technology. Cerebrovascular neurosurgery is no exception, with endovascular treatments changing the treatment paradigm. Clipping of aneurysms is still necessary, however, and advances are still being made to improve patient outcomes within the microsurgical treatment of aneurysms. Surgical rehearsal platforms are surgical simulators that offer the opportunity to rehearse a procedure prior to entering the operative suite. This study is designed to determine whether use of a surgical rehearsal platform in aneurysm surgery is helpful in decreasing aneurysm dissection time and clip manipulation of the aneurysm. METHODS The authors conducted a blinded, prospective, randomized study comparing key effort and time variables in aneurysm clip ligation surgery with and without preoperative use of the SuRgical Planner (SRP) surgical rehearsal platform. Initially, 40 patients were randomly assigned to either of two groups: one in which surgery was performed after use of the SRP (SRP group) and one in which surgery was performed without use of the SRP (control group). All operations were videotaped. After exclusion of 6 patients from the SRP group and 9 from the control group, a total of 25 surgical cases were analyzed by a reviewer blinded to group assignment. The videos were analyzed for total microsurgical time, number of clips used, and number of clip placement attempts. Means and standard deviations (SDs) were calculated and compared between groups. RESULTS The mean (± SD) amount of operative time per clip used was 920 ± 770 seconds in the SRP group and 1294 ± 678 seconds in the control group (p = 0.05). In addition, the mean values for the number of clip attempts, total operative time, ratio of clip attempts to clips used, and time per clip attempt were all lower in the SRP group, although the between-group differences were not statistically significant. 
CONCLUSIONS Preoperative rehearsal with SRP increased efficiency and safety in aneurysm microsurgery as demonstrated by the statistically significant improvement in time per clip used. Although the rest of the outcomes did not demonstrate statistically significant between-group differences, the fact that the SRP group showed improvement in mean values for all measures studied suggests that preoperative rehearsal may increase the efficiency and safety of aneurysm microsurgery. Future studies aimed at improving patient outcome and safety during surgical clipping of aneurysms will be needed to keep pace with the quickly advancing endovascular field.
Moreno, M Perla; Moreno, Alberto; García-González, Luis; Ureña, Aurelio; Hernández, César; Del Villar, Fernando
2016-06-01
This study applied an intervention program, based on video feedback and questioning, to expert female volleyball players to improve their tactical knowledge. The sample consisted of eight female attackers (26 ± 2.6 years old) from the Spanish National Volleyball Team, who were divided into an experimental group (n = 4) and a control group (n = 4). The video feedback and questioning program applied in the study was developed over eight reflective sessions and consisted of three phases: viewing of the selected actions, self-analysis and reflection by the attacker, and joint player-coach analysis. The attackers were videotaped in an actual game and four clips (situations) of each of the attackers were chosen for each reflective session. Two of the clips showed a correct action by the attacker, and two showed an incorrect decision. Tactical knowledge was measured by problem representation with a verbal protocol. The members of the experimental group showed adaptations in long-term memory, significantly improving their tactical knowledge. With respect to conceptual content, there was an increase in the total number of conditions verbalized by the players; with respect to conceptual sophistication, there was an increase in the indication of appropriate conditions with two or more details; and finally, with respect to conceptual structure, there was an increase in the use of double or triple conceptual structures. The intervention program, based on video feedback and questioning, in addition to on-court training sessions of expert volleyball players, appears to improve the athletes' tactical knowledge. © The Author(s) 2016.
Jagsch, Reinhold; Drog, Claudio; Mosgoeller, Wilhelm; Wutzl, Arno; Millesi, Gabriele; Klug, Clemens
2018-01-01
Typically, before and after surgical correction, faces are assessed on still images by surgeons, orthodontists, the patients, and family members. We hypothesized that judgment of faces in motion, and by naïve raters, may more closely reflect the impact on patients’ real lives and the treatment's impact on, for example, career chances. We therefore assessed faces of dysgnathic patients (Class II, III and Laterognathia) on video clips. Class I faces served as anchors and controls. Each patient’s face was assessed twice, before and after treatment, in changing sequence, by 155 naïve raters of similar age to the patients. The raters provided independent estimates on aesthetic trait pairs such as ugly/beautiful and personality trait pairs such as dominant/flexible. Furthermore, the perception of attractiveness, intelligence, health, the person's erotic aura, faithfulness, and five additional items were rated. We estimated the significance of the perceived treatment-related differences and the respective effect sizes by general linear models for repeated measures. The results were comparable to our previous rating on still images. There was an overall trend for faces in video clips to be rated along common stereotypes to a lesser extent than photographs. We observed significant class differences and treatment-related changes in most aesthetic traits (e.g., beauty, attractiveness); these were comparable to intelligence, erotic aura and, to some extent, healthy appearance. While some personality traits (e.g., faithfulness) did not differ between the classes or between baseline and after treatment, we found that the intervention significantly and effectively altered the perception of the personality trait self-confidence. The effect size was highest in Class III patients, smallest in Class II patients, and in between for patients with Laterognathia. All dysgnathic patients benefitted from orthognathic surgery.
We conclude that motion can mitigate marked stereotypes but does not entirely offset the mostly negative perception of dysgnathic faces. PMID:29390018
Skog, Alexander; Peyre, Sarah E; Pozner, Charles N; Thorndike, Mary; Hicks, Gloria; Dellaripa, Paul F
2012-01-01
The situational leadership model suggests that an effective leader adapts leadership style depending on the followers' level of competency. We assessed the applicability and reliability of the situational leadership model when observing residents in simulated hospital floor-based scenarios. Resident teams engaged in simulated clinical scenarios. Video recordings were divided into clips based on Emergency Severity Index v4 acuity scores. Situational leadership styles were identified in the clips by two physicians. Interrater reliability was determined through descriptive statistical analysis. There were 114 participants recorded in 20 sessions, and 109 clips were reviewed and scored. There was a high level of interrater reliability (weighted kappa = .81), supporting the situational leadership model's applicability to medical teams. A suggestive correlation was found between the frequency of changes in leadership style and the ability to effectively lead a medical team. The situational leadership model represents a unique tool to assess medical leadership performance in the context of acuity changes.
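The weighted kappa statistic used above for interrater reliability can be computed as in the sketch below, which uses generic linear disagreement weights and invented category labels; it is an illustration of the statistic, not the authors' analysis code.

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Cohen's weighted kappa with linear disagreement weights.
    r1, r2: integer category labels (0..n_cat-1) from two raters."""
    conf = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()                       # joint rating proportions
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)  # disagreement weights
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))  # chance agreement
    return 1.0 - (w * conf).sum() / (w * expected).sum()
```

Identical rating sequences give kappa = 1, while chance-level agreement gives values near 0; linear weights penalize near-miss category disagreements less than distant ones.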
Using web-based video to enhance physical examination skills in medical students.
Orientale, Eugene; Kosowicz, Lynn; Alerte, Anton; Pfeiffer, Carol; Harrington, Karen; Palley, Jane; Brown, Stacey; Sapieha-Yanchak, Teresa
2008-01-01
Physical examination (PE) skills among U.S. medical students have been shown to be deficient. This study examines the effect of a Web-based physical examination curriculum on first-year medical students' PE skills. Web-based video clips, consisting of instruction in 77 elements of the physical examination, were created using Microsoft Windows Movie Maker software. Medical students' PE skills were evaluated by standardized patients before and after implementation of the Internet-based video. Following implementation of this curriculum, there was a higher level of competency (from 87% in 2002-2003 to 91% in 2004-2005), and poor performances on standardized patient PE exams diminished substantially (from a 14%-22% failure rate in 2002-2003 to 4% in 2004-2005). A significant improvement in first-year medical student performance on the adult PE occurred after implementing the Web-based instructional video.
Paek, Hye-Jin; Kim, Kyongseok; Hove, Thomas
2010-12-01
Focusing on several message features that are prominent in antismoking campaign literature, this content-analytic study examines 934 antismoking video clips on YouTube for the following characteristics: message sensation value (MSV) and three types of message appeal (threat, social and humor). These four characteristics are then linked to YouTube's interactive audience response mechanisms (number of viewers, viewer ratings and number of comments) to capture message reach, viewer preference and viewer engagement. The findings suggest the following: (i) antismoking messages are prevalent on YouTube, (ii) MSV levels of online antismoking videos are relatively low compared with MSV levels of televised antismoking messages, (iii) threat appeals are the videos' predominant message strategy and (iv) message characteristics are related to viewer reach and viewer preference.
Recognizing Induced Emotions of Happiness and Sadness from Dance Movement
Van Dyck, Edith; Vansteenkiste, Pieter; Lenoir, Matthieu; Lesaffre, Micheline; Leman, Marc
2014-01-01
Recent research revealed that emotional content can be successfully decoded from human dance movement. Most previous studies made use of videos of actors or dancers portraying emotions through choreography. The current study applies emotion induction techniques and free movement in order to examine the recognition of emotional content from dance. Observers (N = 30) watched a set of silent videos showing depersonalized avatars of dancers moving to an emotionally neutral musical stimulus after emotions of either sadness or happiness had been induced. Each of the video clips consisted of two dance performances which were presented side-by-side and were played simultaneously; one of a dancer in the happy condition and one of the same individual in the sad condition. After every film clip, the observers were asked to make forced-choices concerning the emotional state of the dancer. Results revealed that observers were able to identify the emotional state of the dancers with a high degree of accuracy. Moreover, emotions were more often recognized for female dancers than for their male counterparts. In addition, the results of eye tracking measurements unveiled that observers primarily focus on movements of the chest when decoding emotional information from dance movement. The findings of our study show that not merely portrayed emotions, but also induced emotions can be successfully recognized from free dance movement. PMID:24587026
Anatomy education for the YouTube generation.
Barry, Denis S; Marzouk, Fadi; Chulak-Oglu, Kyrylo; Bennett, Deirdre; Tierney, Paul; O'Keeffe, Gerard W
2016-01-01
Anatomy remains a cornerstone of medical education despite challenges that have seen a significant reduction in contact hours over recent decades; however, the rise of the "YouTube Generation" or "Generation Connected" (Gen C) offers new possibilities for anatomy education. Gen C, which consists of 80% Millennials, actively interacts with social media and integrates it into the education experience. Most are willing to merge their online presence with their degree programs by engaging with course materials and sharing their knowledge freely using these platforms. This integration of social media into undergraduate learning, and the attitudes and mindset of Gen C, who routinely create and publish blogs, podcasts, and videos online, has changed traditional learning approaches and the student/teacher relationship. To gauge this, second-year undergraduate medical and radiation therapy students (n = 73) were surveyed regarding their use of online social media in relation to anatomy learning. The vast majority of students had employed web-based platforms to source information, with 78% using YouTube as their primary source of anatomy-related video clips. These findings suggest that the academic anatomy community may find value in the integration of social media into blended learning approaches in anatomy programs. This will ensure continued connection with the YouTube generation of students while also allowing for academic and ethical oversight regarding the use of online video clips whose provenance may not otherwise be known. © 2015 American Association of Anatomists.
Proposal for a CLIPS software library
NASA Technical Reports Server (NTRS)
Porter, Ken
1991-01-01
This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library, and a course of action to make that happen is proposed. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.
NASA Astrophysics Data System (ADS)
Weiland, C.; Chadwick, W. W.
2004-12-01
Several years ago we created an exciting and engaging multimedia exhibit for the Hatfield Marine Science Center that lets visitors simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. The public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. We are now completing a revision to the project that will make this engaging virtual exploration accessible to a much larger audience. With minor modifications we will be able to put the exhibit onto the world wide web so that any person with internet access can view and learn about exciting volcanic and hydrothermal activity at Axial Seamount on the Juan de Fuca Ridge. The modifications address some cosmetic and logistic issues confronted in the museum environment, but will mainly involve compressing video clips so they can be delivered more efficiently over the internet. The web version, like the museum version, will allow users to choose from 1 of 3 different dive sites in the caldera of Axial Volcano. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360-degree panorama and can look in all directions by manipulating the computer mouse. By clicking on markers embedded in the scene, the user can then either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene.
Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user is finished exploring the dive site, they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. This virtual exploration is part of the NeMO web site and will be available at http://www.pmel.noaa.gov/vents/dive.html
Vehicle-borne IED detection using the ULTOR correlation processor
NASA Astrophysics Data System (ADS)
Burcham, Joel D.; Vachon, Joyce E.
2006-05-01
Advanced Optical Systems, Inc. developed the ULTOR® system, a real-time correlation processor that looks for improvised explosive devices (IEDs) by examining imagery of vehicles. The system determines the level of threat an approaching vehicle may represent. It works on incoming video collected at different wavelengths, including visible, infrared, and synthetic aperture radar. Sensors that attach to ULTOR can be located wherever necessary to improve safety around a checkpoint. When a suspect vehicle is detected, ULTOR can track the vehicle, alert personnel, check for previous instances of the vehicle, and update other networked systems with the threat information. The ULTOR processing engine focuses on the spatial frequency information available in the image. It correlates the imagery with templates that specify the criteria defining a suspect vehicle, and can perform full-field correlations at a rate of 180 Hz or better. Additionally, the spatial frequency information is applied to a trained neural network to identify suspect vehicles. We have performed various laboratory and field experiments to verify the performance of the ULTOR system in a counter-IED environment. The experiments range from tracking specific targets in video clips to demonstrating real-time ULTOR system performance. The selected targets include various automobiles in both visible and infrared video.
Feigning Amnesia Moderately Impairs Memory for a Mock Crime Video
Mangiulli, Ivan; van Oorsouw, Kim; Curci, Antonietta; Merckelbach, Harald; Jelicic, Marko
2018-01-01
Previous studies showed that feigning amnesia for a crime impairs actual memory for the target event. Lack of rehearsal has been proposed as an explanation for this memory-undermining effect of feigning. The aim of the present study was to replicate and extend previous research by adopting a mock crime video instead of a narrative story. We showed participants a video of a violent crime. Next, they were requested to imagine that they had committed this offense and to either feign amnesia or confess to the crime. A third condition was included: participants in the delayed test-only control condition did not receive any instruction. On subsequent recall tests, participants in all three conditions were instructed to report as much information as possible about the offense. On the free recall test, feigning amnesia impaired memory for the video clip, although participants who were asked to feign crime-related amnesia still outperformed controls. However, no differences between simulators and confessors were found in either correct cued recall or distortion and commission rates. We also explored whether inner speech might modulate memory for the crime. Inner speech traits were not found to be related to the simulating-amnesia effect. Theoretical and practical implications of our results are discussed. PMID:29760675
Red blood cell sedimentation of Apheresis Granulocytes.
Lodermeier, Michelle A; Byrne, Karen M; Flegel, Willy A
2017-10-01
Sedimentation of Apheresis Granulocyte components removes red blood cells. It is used to increase the blood donor pool when blood group-compatible donors cannot be recruited for a patient because of a major ABO incompatibility or incompatible red blood cell antibodies in the recipient. Because granulocytes have little ABO and few other red blood cell antigens on their membrane, such incompatibility lies mostly with the contaminating red blood cells. Video Clip S1 shows the process of red blood cell sedimentation of an Apheresis Granulocyte component. This video was filmed with a single smart phone attached to a commercial tripod and was edited on a tablet computer with free software by an amateur videographer without prior video experience. © 2017 AABB.
Effects of Immediate Repetition in L2 Speaking Tasks: A Focused Study
ERIC Educational Resources Information Center
Bei, Gavin Xiaoyue
2013-01-01
This paper reports on a focused investigation into the immediate effects of oral narrative task repetition by two adult EFL learners of intermediate and high proficiency. Two participants performed a narrative speaking task after watching a cartoon video clip and repeated their performance three times, followed by a retrospective report in an…
ERIC Educational Resources Information Center
Roberts, Amy L. D.; Rogoff, Barbara
2012-01-01
Forty-four pairs of Mexican-heritage and European-heritage US children were asked to characterize differences between two contrasting cultural patterns of working together in video clips that showed a) Mexican Indigenous-heritage children working together by collaborating, helping, observing others, and using nonverbal as well as verbal…
Learning to Teach from Anticipating Lessons through Comics-Based Approximations of Practice
ERIC Educational Resources Information Center
Chen, Chia-Ling
2012-01-01
Teaching is complex and relational work that involves teacher's interactions with individual or multiple students around the subject matter. It has been argued that observation experiences (e.g. field placement or watching video clips) are not sufficient to help prospective teachers to develop knowledge of teaching. This study aims to…
ERIC Educational Resources Information Center
Kaendler, Celia; Wiedmann, Michael; Leuders, Timo; Rummel, Nikol; Spada, Hans
2016-01-01
The monitoring by teachers of collaborative, cognitive, and meta-cognitive student activities in collaborative learning is crucial for fostering beneficial student interaction. In a quasi-experimental study, we trained pre-service teachers (N = 74) to notice behavioral indicators for these three dimensions of student activities. Video clips of…
Children's Understanding of Nonverbal Expressions of Pride
ERIC Educational Resources Information Center
Nelson, Nicole L.; Russell, James A.
2012-01-01
To chart the developmental path of children's attribution of pride to others, we presented children (4 years 0 month to 11 years 11 months of age, N = 108) with video clips of head-and-face, body posture, and multi-cue (both head-and-face and body posture simultaneously) expressions that adults consider to convey pride. Across age groups, 4- and…
Sex Differences in How Erotic and Painful Stimuli Impair Inhibitory Control
ERIC Educational Resources Information Center
Yu, Jiaxin; Hung, Daisy L.; Tseng, Philip; Tzeng, Ovid J. L.; Muggleton, Neil G.; Juan, Chi-Hung
2012-01-01
Witnessing emotional events such as arousal or pain may impair ongoing cognitive processes such as inhibitory control. We found that this may be true only half of the time. Erotic images and painful video clips were shown to men and women shortly before a stop signal task, which measures cognitive inhibitory control. These stimuli impaired…
Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age
ERIC Educational Resources Information Center
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
2013-01-01
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Chinese Mine Warfare: A PLA Navy Assassin’s Mace Capability (China MaritimeStudy, Number 3)
2009-06-01
derived from obsolete torpedoes (e.g., earlier models of China's Yu series) and launched from submarines, they travel along a user-determined course... video clip, originally at web.search.cctv.com, has been removed from the CCTV website. An image from the television footage has been posted on
Advancing Development of Intercultural Competence through Supporting Predictions in Narrative Video
ERIC Educational Resources Information Center
Ogan, Amy; Aleven, Vincent; Jones, Christopher
2009-01-01
Most successes in intelligent tutoring systems have come in well-defined domains like algebra or physics. We investigate how to support students in acquiring ill-defined skills of intercultural competence using an online environment that employs clips of feature films from a target culture. To test the effectiveness of a set of attention-focusing…
Concussion Awareness Education: A Design and Development Research Study
ERIC Educational Resources Information Center
Pilbeam, Renee M.
2016-01-01
This research study looks at the design and development of an online concussion awareness education module. The Keep Your Head in the Game: Concussion Awareness Training for High School Athletes, or Brainbook, is a stand-alone e-learning module designed to run for fifty minutes and to be highly interactive using short video clips with associated…
Chinese and German Teachers' Conceptions of Play and Learning and Children's Play Behaviour
ERIC Educational Resources Information Center
Wu, Shu-Chen; Rao, Nirmala
2011-01-01
Commonalities and distinctions in Hong Kong-Chinese and German kindergarten teachers' conceptions of play and learning were examined. Six video clips of play episodes reflecting common play behavior and themes were selected from observations made during free play in two kindergartens in Hong Kong and two in Germany. Ten Chinese and seven German…
ERIC Educational Resources Information Center
McCurry, David S.
This paper describes a qualitative study exploring the efficacy of using selected multimedia technologies to engage preservice and practicing teachers in critical dialogue. Visual representations, such as 360-degree panoramic views of classrooms hyperlinked to text descriptions, audio clips, and video of learning environments are used as anchor…
YouTube in the Classroom: Helpful Tips and Student Perceptions
ERIC Educational Resources Information Center
Fleck, Bethany K. B.; Beckman, Lisa M.; Sterns, Jillian L.; Hussey, Heather D.
2014-01-01
The rise in popularity of YouTube has made the use of short video clips during college classroom instruction a common learning tool. However, questions still remain on how to best implement this learning tool as well as students' perceptions of its use. Blended Learning Theory and Information Processing Theory provide insights into successful…
ERIC Educational Resources Information Center
Ding, Lin; Domínguez, Higinio
2016-01-01
This paper investigates the noticing of six Chinese mathematics prospective teachers (PSTs) when looking at a procedural error and responding to three specific tasks related to that error. Using video clips of one student's procedural error consisting of exchanging the order of coordinates when applying the distance formula, some variation was…
Semantic Categorization of Placement Verbs in L1 and L2 Danish and Spanish
ERIC Educational Resources Information Center
Cadierno, Teresa; Ibarretxe-Antuñano, Iraide; Hijazo-Gascón, Alberto
2016-01-01
This study investigates semantic categorization of the meaning of placement verbs by Danish and Spanish native speakers and two groups of intermediate second language (L2) learners (Danish learners of L2 Spanish and Spanish learners of L2 Danish). Participants described 31 video clips picturing different types of placement events. Cluster analyses…
Popular Culture in the Classroom: Using Audio and Video Clips to Enhance Survey Classes
ERIC Educational Resources Information Center
Hoover, D. Sandy
2006-01-01
Students often approach history survey classes with a significant degree of dread. Nevertheless, at least one history class is required for graduation from most, if not all, universities, and most students elect to take survey courses to fulfill that requirement. Students rarely enroll in an American history class eagerly, because they anticipate…
An Inquiry-Based Course Using "Physics?" in Cartoons and Movies
ERIC Educational Resources Information Center
Rogers, Michael
2007-01-01
Books, cartoons, movies, and video games provide engaging opportunities to get both science and nonscience students excited about physics. An easy way to use these media in one's classroom is to have students view clips and identify unusual events, odd physics, or list things that violate our understanding of the physics that governs our universe.…
ERIC Educational Resources Information Center
Yap, Boon Chien; Chew, Charles
2014-01-01
This quantitative research study reports the effectiveness of demonstrations supported by appropriate information and communication technology (ICT) tools such as dataloggers, animations and video clips on upper secondary school students' attitudes towards the learning of physics. A sample of 94 secondary four express stream (age 16 years) and…
ERIC Educational Resources Information Center
Golfeto, Raquel M.; de Souza, Deisy G.
2015-01-01
Three children with neurosensory deafness who used cochlear implants were taught to match video clips to dictated sentences. We used matrix training with overlapping components and tested for recombinative generalization. Two 3?×?3 matrices generated 18 sentences. For each matrix, we taught 6 sentences and evaluated generalization with the…
A Reflective Encounter with the Fine Sand Area in a Nursery School Setting
ERIC Educational Resources Information Center
Barnett, Anthony
2016-01-01
This article draws on a model of reflection that involves creating meanings through repeated encounters with evocative objects. Responses to one such evocative object, a 20-second video clip of children playing in the fine sand area, illustrates the "turning toward" and then "turning away" from the object to engage with broader…
ERIC Educational Resources Information Center
Ghavamnia, M.; Eslami-Rasekh, A.; Vahid Dastjerdi, H.
2018-01-01
This study investigates the relative effectiveness of four types of input-enhanced instruction on the development of Iranian EFL learners' production of pragmatically appropriate and grammatically accurate suggestions. Over a 16-week course, input delivered through video clips was enhanced differently in four intact classes: (1) metapragmatic…
Thinking Visually: Using Visual Media in the College Classroom
ERIC Educational Resources Information Center
Tobolowsky, Barbara F.
2007-01-01
Getting through to students in the classroom continues to be one of the great mysteries of an educator's life. What will capture their attention, and, more important, what will transform their thinking? Film industry veteran and educator Barbara Tobolowsky returned to her roots in visual media to find answers. Using video clips to introduce a…
Explaining Global Women's Empowerment Using Geographic Inquiry
ERIC Educational Resources Information Center
Grubbs, Melanie R.
2018-01-01
It is difficult for students who are just being introduced to major geographical concepts to understand how relatively free countries like India or Mali can have such high levels of human rights abuses as child brides, dowry deaths, and domestic violence. Textbooks explain it and video clips show examples, but it still seems surreal to teenagers…
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations of the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arc seconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
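The geometry behind the drift method can be illustrated with a short sketch: with the telescope drives off, a star trail fixes both the image scale (15.0411″ · cos δ per second of time) and the east-west direction, from which position angle and separation follow. The helper below is purely illustrative (not the authors' software); it assumes a non-mirrored image, and `drift_vec` is the pixel displacement of a drifting star over `drift_time_s` seconds.

```python
import math

def measure_pair(drift_vec, dec_deg, drift_time_s, primary_xy, secondary_xy):
    """Illustrative sketch of the self-calibrating drift idea.

    A star drifting for drift_time_s seconds of time moves
    15.0411 * cos(dec) arcseconds westward, which fixes both the
    plate scale and the east-west axis on the sensor.
    """
    dxw, dyw = drift_vec                       # drift vector points toward west
    drift_px = math.hypot(dxw, dyw)
    scale = 15.0411 * math.cos(math.radians(dec_deg)) * drift_time_s / drift_px
    ex, ey = -dxw / drift_px, -dyw / drift_px  # east is opposite the drift
    nx, ny = -ey, ex                           # north = east rotated +90 deg (parity assumption)
    dx = secondary_xy[0] - primary_xy[0]
    dy = secondary_xy[1] - primary_xy[1]
    separation = math.hypot(dx, dy) * scale    # arcseconds
    pa = math.degrees(math.atan2(dx * ex + dy * ey, dx * nx + dy * ny)) % 360.0
    return pa, separation
```

For a 100-pixel westward drift over 10 seconds at the celestial equator, the derived scale is about 1.504″/pixel; a companion 10 pixels due "north" on the chip then measures at position angle 0° and roughly 15″ separation.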
Use of streamed internet video for cytology training and education: www.PathLab.org.
Poller, David; Ljung, Britt-Marie; Gonda, Peter
2009-05-01
An Internet-based method is described for submission of video clips to a website editor to be reviewed, edited, and then uploaded onto a video server, with a hypertext link to a website. The information on the webpages is searchable via the website sitemap on Internet search engines. A survey of users who accessed a single 59-minute FNA cytology training video via the website showed a mean usefulness score of 3.75 for specialists/consultants (range 1-5, n = 16) and 4.4 for trainees (range 3-5, n = 12), with a mean score of 3.9 for visual and sound quality (range 2-5, n = 16). Fifteen of 17 respondents thought that posting video training material on the Internet was a good idea, and 9 of 17 would also consider submitting training videos to a similar website. This brief exercise has shown that there is value in posting educational or training video content on the Internet and that the use of streamed video accessed via the Internet will be of increasing importance. © 2009 Wiley-Liss, Inc.
Kleiman, Amanda M; Forkin, Katherine T; Bechtel, Allison J; Collins, Stephen R; Ma, Jennie Z; Nemergut, Edward C; Huffmyer, Julie L
2017-05-01
Transesophageal echocardiography (TEE) is a valuable monitor for patients undergoing cardiac and noncardiac surgery as it allows for evaluation of cardiovascular compromise in the perioperative period. It is challenging for anesthesiology residents and medical students to learn to use and interpret TEE in the clinical environment. A critical component of learning to use and interpret TEE is a strong grasp of normal cardiovascular ultrasound anatomy. Fifteen fourth-year medical students and 15 post-graduate year (PGY) 1 and 2 anesthesiology residents without prior training in cardiac anesthesia or TEE viewed normal cardiovascular anatomy TEE video clips; participants were randomized to learn cardiac anatomy in generative retrieval (GR) or standard practice (SP) groups. GR participants were required to verbally identify each unlabeled cardiac anatomical structure within 10 seconds of the TEE video appearing on the screen; a correctly labeled TEE video clip was then shown to the GR participant for 5 more seconds. SP participants viewed the same TEE video clips as GR, but without any requirement to generate an answer; for the SP group, each TEE video image was labeled with the correctly identified anatomical structure for the full 15-second period. All participants were tested for intermediate (1 week) and late (1 month) retention of normal TEE cardiovascular anatomy. Improvement in intermediate and late retention of TEE cardiovascular anatomy was evaluated using a linear mixed effects model with random intercepts and random slopes. There was no statistically significant difference in baseline scores between GR (49% ± 11) and SP (50% ± 12), with mean difference (95% CI) -1.1% (-9.5, 7.3%). At 1 week following the educational intervention, GR (90% ± 5) performed significantly better than SP (82% ± 11), with mean difference (95% CI) 8.1% (1.9, 14.2%); P = .012.
This significant increase in scores persisted in the late posttest session at one month (GR: 83% ± 12; SP: 72% ± 12), with mean difference (95% CI) 10.2% (1.3, 19.1%); P = .026. Mixed effects analysis showed significant improvements in TEE cardiovascular anatomy over time, at 5.9% and 3.5% per week for the GR and SP groups, respectively (P = .0003); GR improved marginally faster than SP (P = .065). Medical students and anesthesiology residents inexperienced in the use of TEE showed both improved learning and retention of basic cardiovascular ultrasound anatomy with the incorporation of GR into the educational experience.
dCLIP: a computational approach for comparative CLIP-seq analyses
2014-01-01
Although comparison of RNA-protein interaction profiles across different conditions has become increasingly important to understanding the function of RNA-binding proteins (RBPs), few computational approaches have been developed for quantitative comparison of CLIP-seq datasets. Here, we present an easy-to-use command line tool, dCLIP, for quantitative CLIP-seq comparative analysis. The two-stage method implemented in dCLIP, including a modified MA normalization method and a hidden Markov model, is shown to be able to effectively identify differential binding regions of RBPs in four CLIP-seq datasets, generated by HITS-CLIP, iCLIP and PAR-CLIP protocols. dCLIP is freely available at http://qbrc.swmed.edu/software/. PMID:24398258
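The MA normalization step mentioned above can be sketched in a few lines. This is not dCLIP's actual implementation (dCLIP pairs a modified MA method with a hidden Markov model); it only illustrates the underlying idea of aligning two samples' log-ratios around zero before comparing binding regions.

```python
import math

def ma_normalize(counts_a, counts_b, pseudocount=1.0):
    """Sketch of MA-style normalization for two CLIP-seq samples.

    For each genomic bin, M = log2 ratio of the two counts and
    A = mean log2 intensity. Subtracting the median M from every bin
    centers the bulk of the M values on zero, removing a global
    sequencing-depth offset between the samples.
    """
    m_vals, a_vals = [], []
    for x, y in zip(counts_a, counts_b):
        x += pseudocount  # pseudocount avoids log(0) on empty bins
        y += pseudocount
        m_vals.append(math.log2(x / y))
        a_vals.append(0.5 * math.log2(x * y))
    offset = sorted(m_vals)[len(m_vals) // 2]  # median M
    return [m - offset for m in m_vals], a_vals
```

After normalization, bins with M far from zero are the candidates for differential binding that a downstream model (an HMM in dCLIP's case) would segment.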
Tsoi, D T-Y; Lee, K-H; Gee, K A; Holden, K L; Parks, R W; Woodruff, P W R
2008-06-01
The ability to appreciate humour is essential to successful human interactions. In this study, we hypothesized that individuals with schizophrenia would have diminished ability to recognize and appreciate humour. The relationship between humour experience and clinical symptoms, cognitive and social functioning was examined. Thirty patients with a DSM-IV diagnosis of schizophrenia were compared with 30 age-, gender-, IQ- and ethnicity-matched healthy controls. Humour recognition was measured by identification of humorous moments in four silent slapstick comedy film clips and calculated as d-prime (d') according to signal detection theory. Humour appreciation was measured by self-report mood state and funniness ratings. Patients were assessed for clinical symptoms, theory of mind ability, executive function [using the Wisconsin Card Sorting Test (WCST)] and social functioning [using the Life Skills Profile (LSP)]. Patient and control groups did not differ in the funniness ratings they attributed to the video clips. Patients with schizophrenia had a lower d' (humour) compared to the controls, after controlling for (1) the performance of a baseline recognition task with a non-humorous video clip and (2) severity of depressive symptoms. In patients, d' (humour) had significant negative correlation with delusion and depression scores, the perseverative error score of the WCST and the total scores of the LSP. Compared with controls, patients with schizophrenia were less sensitive at detecting humour but similarly able to appreciate humour. The degree of humour recognition difficulty may be associated with the extent of executive dysfunction and thus contribute to the psychosocial impairment in patients with schizophrenia.
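The d′ measure used above follows standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch (the log-linear correction for extreme rates is an assumption, not stated in the abstract):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps the
    z-transform finite when a rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

With hypothetical counts of 18 hits, 6 misses, 6 false alarms, and 34 correct rejections, d′ comes out around 1.6; equal hit and false-alarm rates give d′ = 0, i.e., no sensitivity to humorous moments.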
Cross-modal enhancement of speech detection in young and older adults: does signal content matter?
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra
2011-01-01
The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
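The adaptive variation of signal level described above is typically done with a staircase. The sketch below assumes a 2-down/1-up rule (the abstract does not specify the study's exact tracking rule), which converges near the 70.7%-correct point; threshold is taken as the mean level at the reversals.

```python
def staircase_threshold(responses, start_db=0.0, step_db=2.0):
    """2-down/1-up staircase sketch: lower the level after two
    consecutive correct detections, raise it after each miss, and
    average the levels at which the track reversed direction."""
    level = start_db
    correct_streak = 0
    direction = 0  # +1 rising, -1 falling, 0 before the first step
    reversals = []
    for correct in responses:
        if correct:
            correct_streak += 1
            if correct_streak < 2:
                continue
            correct_streak = 0
            step, new_dir = -step_db, -1
        else:
            correct_streak = 0
            step, new_dir = step_db, 1
        if direction != 0 and direction == -new_dir:
            reversals.append(level)  # track changed direction here
        direction = new_dir
        level += step
    return sum(reversals) / len(reversals) if reversals else level
```

Feeding it a trial-by-trial sequence of correct/incorrect detections returns the estimated SNR at threshold in dB relative to the starting level.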
Thiarawat, Peeraphong; Jahromi, Behnam Rezai; Kozyrev, Danil A; Intarakhao, Patcharin; Teo, Mario K; Choque-Velasquez, Joham; Hernesniemi, Juha
2017-05-01
The objectives of this study were to analyze microsurgical techniques and to determine correlations between microsurgical techniques and radiographic findings in the microneurosurgical treatment of posterior communicating artery aneurysms (PCoAAs). We retrospectively analyzed radiographic findings and videos of surgeries in 64 patients with PCoAAs who underwent microsurgical clipping by the senior author from August 2010 to 2014. Of the 64 aneurysms, 30 (47%) had acute subarachnoid hemorrhage (SAH), which necessitated lamina terminalis fenestration (odds ratio [OR], 67.67; P < 0.001) and Liliequist membrane fenestration (OR, 19.62; P < 0.001). The low-lying aneurysms significantly more often required coagulation of the dura covering the anterior clinoid process (ACP) (OR, 7.43; P = 0.003) or anterior clinoidectomy (OR, 91.0; P < 0.001). We preferred straight clips in 45 (83%) of 54 posterolaterally projecting aneurysms (OR, 45.0; P < 0.001), but preferred curved clips for posteromedially projecting aneurysms (OR, 6.39; P = 0.008). The mean operative time from brain retraction to final clipping was 17 minutes and 43 seconds. Postoperative computed tomography angiography revealed complete occlusion of 60 (94%) aneurysms. Three (4.6%) patients with acute SAH suffered postoperative lacunar infarction. For ruptured aneurysms, lamina terminalis and Liliequist membrane fenestration are useful for additional cerebrospinal fluid drainage. For low-lying aneurysms, coagulation of the dura covering the ACP or tailored anterior clinoidectomy might be necessary for exposing the proximal aneurysm neck. The type of clip depends on the direction of aneurysm projection. Microsurgical clipping of PCoAAs can achieve a good immediate complete occlusion rate with a low postoperative stroke rate. Copyright © 2017 Elsevier Inc. All rights reserved.
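The odds ratios reported above come from 2×2 contingency tables. A minimal sketch of an odds ratio with a 95% Wald confidence interval (the paper's exact statistical procedure is not described in the abstract, so this is a generic illustration):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio with a 95% Wald CI for the 2x2 table:

        exposed:   a (event)  b (no event)
        unexposed: c (event)  d (no event)

    A Haldane correction of 0.5 is applied if any cell is zero.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

For example, a table with 10/5 events among exposed and 2/8 among unexposed gives OR = 8.0; a confidence interval excluding 1 corresponds to the small P values quoted in the abstract.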
NASA Astrophysics Data System (ADS)
Haines-Stiles, G.; Akuginow, E.
2010-12-01
POLAR-PALOOZA and its companion project, "International POLAR-PALOOZA" shared the same central premise: that polar researchers, speaking for themselves, could be powerful communicators about the science and mission of the 4th International Polar Year, and could successfully engage a wide variety of public audiences across America and around the world. Supported for the US tour by NSF and NASA, and internationally by NSF alone, the project enlisted more than forty American researchers, and 14 polar scientists from Brazil, China and Australia, to participate in events at science centers and natural history museums, universities, public libraries and schools, and also for targeted outreach to special audiences such as young female researchers in Oklahoma, or the Downtown Rotary in San Diego. Evaluations by two different ISE groups found similar results domestically and internationally. When supported by HD video clips and presenting informally in teams of 3, 4, 5 and sometimes even 6 researchers as part of a fast-paced "show," the scientists themselves were almost always rated as among the most important aspects of the program. Significant understandings about polar science and global climate change resulted, along with a positive impression of the research undertaken during IPY. This presentation at Fall AGU 2010 will present results from the Summative Evaluation of both projects, show representative video clips of the public presentations, share photographs of some of the most dramatically varied venues and candid behind-the-scenes action, and share "Lessons Learned" that can be broadly applied to the dissemination of Earth and space science research. These include: collaboration with partner institutions is never easy. (Duh.) Authentic props (such as ice cores, when not trashed by TSA) make a powerful impression on audiences, and give reality to remote places and complex science. 
And, most importantly, since 85% of Americans have never met a scientist, traveling science road shows, even in a time of plentiful media, can serve a powerful and important function. Tips and tricks? Disastrous "rehearsals" often make for successful performances. Personality is more important than media coaching. Stories are more important than facts. Less is sometimes more. Researchers appreciate hearing "Thank you for your service." Lastly, evaluation of "International POLAR-PALOOZA" indicates that many aspects of the model travel well, with the opportunity for personal interaction rating highly with all audiences. (And, yes, a short clip of a rap/music video about ice coring will be played.)
Teaching Health Literacy Using Popular Television Programming: A Qualitative Pilot Study
Primack, Brian A.; Wickett, Dustin J.; Kraemer, Kevin L.; Zickmund, Susan
2011-01-01
Background Teaching of health and medical concepts in the K-12 curriculum may help improve health literacy. Purpose The purpose of this project was to determine acceptability and preliminary efficacy of pilot implementation of a health literacy curriculum using brief clips from a popular television program. Methods Participants included 55 ninth-grade students in a low-income school with a high proportion of minority students. The curriculum used three brief interspersed segments from the television show ER to teach basic topics in cardiology. After the 30-minute experimental curriculum, students completed open-ended surveys, which were coded qualitatively. Results The most common codes described “enjoyment” (N=28), “acquisition of new knowledge” (N=28), “informative” (N=15), “interesting” (N=12), and “TV/video” (N=10). We found on average 2.9 examples of medical content per participant. Of the 26 spontaneously generated verifiable statements, 24 (92.3%) were judged as accurate by two independent coders (κ=0.70, P=.0002). Discussion Use of brief segments of video material contributed to the acceptability of health education curricula without detracting from students’ acquisition of accurate information. Translation to Health Education Practice Health education practitioners may wish to include brief clips from popular programming to motivate students and provide context for health-related lessons. PMID:23998135
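The inter-coder agreement of κ = 0.70 quoted above is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch for two raters coding the same items:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)
```

With hypothetical codes such as `["acc", "acc", "inacc", "acc"]` versus `["acc", "inacc", "inacc", "acc"]`, raw agreement is 0.75 but kappa is only 0.5, showing the chance correction at work.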
Connecting multimodality in human communication
Regenbogen, Christina; Habel, Ute; Kellermann, Thilo
2013-01-01
A successful reciprocal evaluation of social signals serves as a prerequisite for social coherence and empathy. In a previous fMRI study, we examined naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels transported congruent emotional or neutral information, respectively. Three conditions selectively presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the underlying mechanisms accompanying a naturalistically displayed human social interaction in some key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we pursued a reanalysis of the data. Here, we focused on two different descriptions of temporal characteristics within four modality-related regions [right fusiform gyrus (FFG), left auditory cortex (AC), left angular gyrus (AG), and left dorsomedial prefrontal cortex (dmPFC)]. First, by means of a finite impulse response (FIR) analysis within each region, we examined the post-stimulus time-courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) in order to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but stronger activation than that observed in neutral videos toward the end of the stimuli, possibly reflecting bottom-up processes that compensate for a lack of emotional information.
The DCM analysis instead showed pronounced top-down control. Remarkably, all connections from the dmPFC to the three other regions were modulated by the experimental conditions. This observation is in line with the presumed role of the dmPFC in the allocation of attention. In contrast, all incoming connections to the AG were modulated, indicating its key role in integrating multimodal information and supporting comprehension. Notably, the input from the FFG to the AG was enhanced when facial expressions conveyed emotional information. These findings serve as preliminary results in understanding network dynamics in human emotional communication and empathy. PMID:24265613
Tsujimura, Akira; Kiuchi, Hiroshi; Soda, Tetsuji; Takezawa, Kentaro; Fukuhara, Shinichiro; Takao, Tetsuya; Sekiguchi, Yuki; Iwasa, Atsushi; Nonomura, Norio; Miyagawa, Yasushi
2017-09-01
Very little has been elucidated about sexual interest in female-to-male (FtM) transsexual persons. To investigate the sexual interest of FtM transsexual persons vs that of men using an eye-tracking system. The study included 15 men and 13 FtM transsexual subjects who viewed three sexual videos (clip 1: sexy clothed young woman kissing the region of the male genitals covered by underwear; clip 2: naked actor and actress kissing and touching each other; and clip 3: heterosexual intercourse between a naked actor and actress) in which several regions were designated for eye-gaze analysis in each frame. The designation of each region was not visible to the participants. Visual attention was measured across each designated region according to gaze duration. For clip 1, there was a statistically significant sex difference in the viewing pattern between men and FtM transsexual subjects. Longest gaze time was for the eyes of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. For clip 2, there also was a statistically significant sex difference. Longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects, and there was a significant difference between regions with longest gaze time. The most apparent difference was in the gaze time for the body of the actor: the percentage of time spent gazing at the body of the actor was 8.35% in FtM transsexual subjects, whereas it was only 0.03% in men. For clip 3, there were no statistically significant differences in viewing patterns between men and FtM transsexual subjects, although longest gaze time was for the face of the actress in men, whereas it was for non-human regions in FtM transsexual subjects. We suggest that the characteristics of sexual interest of FtM transsexual persons are not the same as those of biological men. Tsujimura A, Kiuchi H, Soda T, et al. 
The Pattern of Sexual Interest of Female-to-Male Transsexual Persons With Gender Identity Disorder Does Not Resemble That of Biological Men: An Eye-Tracking Study. Sex Med 2017;5:e169-e174. Copyright © 2017. Published by Elsevier Inc.
Apparatus for the compact cooling of modules
Iyengar, Madhusudan K.; Parida, Pritish R.
2015-07-07
An apparatus for the compact cooling of modules. The apparatus includes a clip, a first cover plate coupled to a first side of the clip, a second cover plate coupled to a second side of the clip opposite to the first side of the clip, a first frame thermally coupled to the first cover plate, and a second frame thermally coupled to the second cover plate. Each of the first frame and the second frame may include a plurality of channels for passing coolant through the first frame and the second frame, respectively. Additionally, the apparatus may further include a filler for directing coolant through the plurality of channels, and for blocking coolant from flowing along the first side of the clip and the second side of the clip.
Young Children's Sensitivity to New and Given Information when Answering Predicate-Focus Questions
ERIC Educational Resources Information Center
Salomo, Dorothe; Lieven, Elena; Tomasello, Michael
2010-01-01
In two studies we investigated 2-year-old children's answers to predicate-focus questions depending on the preceding context. Children were presented with a successive series of short video clips showing transitive actions (e.g., frog washing duck) in which either the action (action-new) or the patient (patient-new) was the changing, and therefore…
Anticipating Intentional Actions: The Effect of Eye Gaze Direction on the Judgment of Head Rotation
ERIC Educational Resources Information Center
Hudson, Matthew; Liu, Chang Hong; Jellema, Tjeerd
2009-01-01
Using a representational momentum paradigm, this study investigated the hypothesis that judgments of how far another agent's head has rotated are influenced by the perceived gaze direction of the head. Participants observed a video-clip of a face rotating 60[degrees] towards them starting from the left or right profile view. The gaze direction of…
ERIC Educational Resources Information Center
Furtado, Ovande, Jr.; Gallagher, Jere D.
2012-01-01
Mastery of fundamental movement skills (FMS) is an important factor in preventing weight gain and increasing physical activity. To master FMS, performance evaluation is necessary. In this study, we investigated the reliability of a new observational assessment tool. In Phase I, 110 video clips of children performing five locomotor, and six…
Re Viewing Listening: "Clip Culture" and Cross-Modal Learning in the Music Classroom
ERIC Educational Resources Information Center
Webb, Michael
2010-01-01
This article envisions a new, cross-modal approach to classroom music listening, one that takes advantage of students' rising screen literacy and the ever-expanding archive of music-related visual material available on DVD and on video sharing sites such as YouTube. It is grounded in current literature on music performance studies, embodied music…
Mother-Child Shared Reading with Print and Digital Texts
ERIC Educational Resources Information Center
Kim, Ji Eun; Anderson, Jim
2008-01-01
The purpose of this study was to (1) compare mother-child interactions in three contexts: shared reading with a book in a traditional print format, with an electronic book in a CD-ROM format, and with an electronic book in a video clip format; (2) compare mother-child interactions with a three-year-old and a seven-year-old; and (3) compare…
Using Artifacts to Understand the Life of a Soldier in World War II
ERIC Educational Resources Information Center
Anson, Staci
2009-01-01
For years, when the author taught about World War II, she used primary and secondary source readings, she presented Power Points, and had her students watch newsreels and other video clips. Today, her students interact with actual artifacts from history so that they can draw conclusions and gain understanding about what the soldiers' lives were…
Evaluating Augmented Reality to Complete a Chain Task for Elementary Students with Autism
ERIC Educational Resources Information Center
Cihak, David F.; Moore, Eric J.; Wright, Rachel E.; McMahon, Don D.; Gibbons, Melinda M.; Smith, Cate
2016-01-01
The purpose of this study was to examine the effects of augmented reality to teach a chain task to three elementary-age students with autism spectrum disorders (ASDs). Augmented reality blends digital information within the real world. This study used a marker-based augmented reality picture prompt to trigger a video model clip of a student…
ERIC Educational Resources Information Center
Kearney, Matthew; Treagust, David F.; Yeo, Shelley; Zadnik, Marjan G.
2001-01-01
Discusses student and teacher perceptions of a new development in the use of the predict-observe-explain (POE) strategy. This development involves the incorporation of POE tasks into a multimedia computer program that uses real-life, digital video clips of difficult, expensive, time consuming, or dangerous scenarios as stimuli for these tasks.…
ERIC Educational Resources Information Center
Palmer, Loretta
A basic algebra unit was developed at Utah Valley State College to emphasize applications of mathematical concepts in the work world, using video and computer-generated graphics to integrate textual material. The course was implemented in three introductory algebra sections involving 80 students and taught algebraic concepts using such areas as…
ERIC Educational Resources Information Center
Koonce, Danel A.; Cruce, Michael K.; Aldridge, Jennifer O.; Langford, Courtney A.; Sporer, Amy K.; Stinnett, Terry A.
2004-01-01
Two hundred fifty-nine preservice teachers at a medium-sized university in the Southwest participated in the current study. The participants were randomly assigned to a labeled condition, Attention Deficit Hyperactivity Disorder, or nonlabeled condition, and were presented a vignette in one of three forms: a written case study, a video clip, or a…
ERIC Educational Resources Information Center
Mitchell, Melissa Sue
2011-01-01
Cyberbullying takes place through the information technology that students access every day: cell phones, text messages, email, Internet messaging, social networks, pictures, and video clips. With the world paying more attention to this new form of bullying, scholars have been researching the topic in an attempt to learn more about this…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McInerney, Joseph D.
2003-03-31
"Genetics and Major Psychiatric Disorders: A Program for Genetic Counselors" provides an introduction to psychiatric genetics, with a focus on the genetics of common complex disease, for genetics professionals. The program is available as a CD-ROM and an online educational resource. The online version requires a direct internet connection. Each educational module begins with an interactive case study that raises significant issues addressed in that module. In addition, case studies provided throughout the educational materials support teaching of major concepts. Incorporated throughout the content are expert video clips, video clips from individuals affected by psychiatric illness, and optional "learn more" materials that offer greater depth on a particular topic. The structure of the CD-ROM permits self-navigation, but we have suggested a sequence that allows materials to build upon each other. At any point in the materials, users may pause and look up terms in the glossary or review the DSM-IV criteria for selected psychiatric disorders. A detailed site map is available for those who choose to self-navigate through the content.
Optimism as a predictor of the effects of laboratory-induced stress on fears and hope.
Kimhi, Shaul; Eshel, Yohanan; Shahar, Eldad
2013-01-01
The objective of the current study is to explore optimism as a predictor of personal and collective fear, as well as hope, following laboratory-induced stress. Students (N = 107; 74 female, 33 male) were assigned randomly to either the experimental group (stress: political violence video clip) or the control group (no stress: nature video clip). Questionnaires of fear and hope were administered immediately after the experiment (Time 1) and 3 weeks later (Time 2). Structural equation modeling indicated the following: (a) Optimism significantly predicted both fear and hope in the stress group at Time 1, but not in the no-stress group. (b) Optimism predicted hope but not fear at Time 2 in the stress group. (c) Hope at Time 1 significantly predicted hope at Time 2 in both the stress and the no-stress groups. (d) Gender did not significantly predict fear at Time 1 in the stress group, despite a significant difference between genders. This study supports previous studies indicating that optimism plays an important role in people's coping with stress. However, our data raise the question of whether optimism by itself, or environmental stress by itself, can accurately predict the stress response.
2017-11-03
A video news file (or a collection of raw video and interview clips) about the EcAMSat mission. Ever wonder what would happen if you got sick in space? NASA is sending samples of bacteria into low-Earth orbit to find out. One of the latest small satellite missions from NASA’s Ames Research Center in California’s Silicon Valley is the E. coli Anti-Microbial Satellite, or EcAMSat for short. The CubeSat – a spacecraft the size of a shoebox built from cube-shaped units – will explore how effectively antibiotics can combat E. coli bacteria in the low gravity of space. This information will help us improve how we fight infections, providing safer journeys for astronauts on their future voyages, and offer benefits for medicine here on Earth.
Video at Sea: Telling the Stories of the International Ocean Discovery Program
NASA Astrophysics Data System (ADS)
Wright, M.; Harned, D.
2014-12-01
Seagoing science expeditions offer an ideal opportunity for storytelling. While many disciplines involve fieldwork, few offer the adventure of spending two months at sea on a vessel hundreds of miles from shore with several dozen strangers from all over the world. As a medium, video is nearly ideal for telling these stories; it can capture the thrill of discovery, the agony of disappointment, the everyday details of life at sea, and everything in between. At the International Ocean Discovery Program (IODP, formerly the Integrated Ocean Drilling Program), we have used video as a storytelling medium for several years with great success. Over this timeframe, camera equipment and editing software have become cheaper and easier to use, while web sites such as YouTube and Vimeo have enabled sharing with just a few mouse clicks. When it comes to telling science stories with video, the barriers to entry have never been lower. As such, we have experimented with many different approaches and a wide range of styles. On one end of the spectrum, live "ship-to-shore" broadcasts with school groups - conducted with an iPad and free videoconferencing software such as Skype and Zoom - enable curious minds to engage directly with scientists in real-time. We have also contracted with professional videographers and animators who offer the experience, skill, and equipment needed to produce polished clips of the highest caliber. Amateur videographers (including some scientists looking to make use of their free time on board) have shot and produced impressive shorts using little more than a phone camera. In this talk, I will provide a brief overview of our efforts to connect with the public using video, including a look at how effective certain tactics are for connecting to specific audiences.
Shao, Yu-Yun; Liu, Tsung-Hao; Lee, Ying-Hui; Hsu, Chih-Hung; Cheng, Ann-Lii
2016-07-01
The Cancer of the Liver Italian Program (CLIP) score is a commonly used staging system for hepatocellular carcinoma (HCC) that is helpful in predicting the prognosis of advanced HCC. CLIP uses the Child-Turcotte-Pugh (CTP) score to evaluate liver reserve. A new scoring system, the albumin-bilirubin (ALBI) grade, has been proposed to evaluate liver reserve objectively. We examined whether modifying CLIP with ALBI retained its prognostic prediction for patients with advanced HCC. We included patients who received first-line antiangiogenic therapy for advanced HCC. Liver reserve was assessed using the CTP and ALBI scores, which were then incorporated into CLIP and ALBI-CLIP, respectively. To assess their prognostic performance, Cox's proportional hazards model and concordance indexes were used. A total of 142 patients were included; 137 of them were classified CTP A and 5 patients CTP B. Patients could be divided into four or five groups with different prognoses according to CLIP and ALBI-CLIP, respectively. A higher R(2) (0.249 vs 0.216) and a lower Akaike information criterion (995.0 vs 1001.1) were observed for ALBI-CLIP than for CLIP in the Cox model predicting overall survival. ALBI-CLIP remained an independent predictor of overall survival when CLIP and ALBI-CLIP were simultaneously incorporated in Cox models allowing variable selection with adjustment for hepatitis etiology, treatment, and performance status. The concordance index was also higher for ALBI-CLIP than for CLIP (0.724 vs 0.703). Modification of CLIP scoring with ALBI, which objectively assesses liver reserve, retains and might even improve prognostic prediction for advanced HCC. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
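The ALBI grade referenced above has a simple published form (Johnson et al., 2015): a linear predictor computed from serum bilirubin and albumin, mapped to a grade by fixed cutoffs. The sketch below uses the commonly cited coefficients and cutoffs; verify them against the original publication before any real use, and note that the patient values shown are hypothetical:

```python
import math

def albi_score(bilirubin_umol_l, albumin_g_l):
    """Commonly cited ALBI linear predictor:
    score = 0.66 * log10(bilirubin [umol/L]) - 0.085 * albumin [g/L]."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    """Published cutoffs: grade 1 if score <= -2.60,
    grade 2 if -2.60 < score <= -1.39, grade 3 otherwise."""
    if score <= -2.60:
        return 1
    if score <= -1.39:
        return 2
    return 3

# Hypothetical patient: bilirubin 17 umol/L, albumin 42 g/L.
s = albi_score(17.0, 42.0)
print(albi_grade(s))  # prints 1 (well-preserved liver reserve)
```

Unlike the CTP score, this requires no subjective assessment of ascites or encephalopathy, which is the "objective evaluation of liver reserve" the abstract refers to.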
The illusive sound of a Bundengan string
NASA Astrophysics Data System (ADS)
Parikesit, Gea O. F.; Kusumaningtyas, Indraswari
2017-09-01
The acoustics of a vibrating string is frequently used as a simple example of how physics can be applied in the field of art. In this paper we describe a simple experiment and analysis using a clipped string. This experiment can generate scientific curiosity among students because the sound generated by the string seems surprising to our senses. The first surprise comes from the gong-like sounds produced by the string, which we usually associate with metallic instruments rather than string instruments. The second surprise comes from the fact that when we shift the clip we perceive an increase in pitch, even though the measured frequency with the maximum amplitude actually decreases. We use high-speed video recording as well as audio spectral analysis to elucidate the physics behind these two surprises. A set of student activities is prepared to help them follow up on their curiosity. Students can make their own clipped string, found in Indonesia in an instrument called the Bundengan, or set up their own prepared piano as invented by John Cage.
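The pitch paradox described in this abstract plays out against the ideal fixed-fixed string model, in which the fundamental is f1 = (1/2L)·sqrt(T/μ): shortening the vibrating length should raise the pitch. A minimal sketch of that baseline model, with purely illustrative parameter values (the real clipped string departs from this ideal, which is the point of the paper):

```python
import math

def string_fundamental(length_m, tension_n, mu_kg_per_m):
    """Fundamental frequency of an ideal string fixed at both ends:
    f1 = (1 / 2L) * sqrt(T / mu)."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2.0 * length_m)

# Hypothetical string: 0.5 m long, 60 N tension, 0.6 g/m linear density.
full = string_fundamental(0.50, 60.0, 0.0006)

# Moving the clip to 0.4 m shortens the vibrating segment, so the
# ideal model predicts a higher fundamental (higher perceived pitch).
clipped = string_fundamental(0.40, 60.0, 0.0006)
assert clipped > full
```

The surprise reported in the paper is that the spectral peak of the real instrument does not follow this simple prediction even though the perceived pitch does, motivating the high-speed video and spectral analysis.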
Visual skills involved in decision making by expert referees.
Ghasemi, Abdollah; Momeni, Maryam; Jafarzadehpur, Ebrahim; Rezaee, Meysam; Taheri, Hamid
2011-02-01
Previous studies have compared visual skills of expert and novice athletes; referees' performance has not been addressed. Visual skills of two groups of expert referees, successful and unsuccessful in decision making, were compared. Using video clips of soccer matches to assess decision-making success of 41 national and international referees from 31 to 42 years of age, 10 top referees were selected as the Successful group and 10 as the Unsuccessful group. Visual tests included visual memory, visual reaction time, peripheral vision, recognition speed, saccadic eye movement, and facility of accommodation. The Successful group had better visual skills than the Unsuccessful group. Such visual skills enhance soccer referees' performance and may be recommended for young referees.
Hemmesch, Amanda R
2014-09-01
After viewing short video clips of individuals with Parkinson's disease (PD) who varied in the symptoms of facial masking (reduced expressivity) and abnormal bodily movement (ABM: including tremor and related movement disorders), older adult observers provided their first impressions of targets' social positivity. Impressions of targets with higher masking or ABM were more negative than impressions of targets with lower masking or ABM. Furthermore, masking was more detrimental for impressions of women and when observers considered emotional relationship goals, whereas ABM was more detrimental for instrumental relationship goals. This study demonstrated the stigmatizing effects of both reduced and excessive movement. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Video-induced yawning in stumptail macaques (Macaca arctoides)
Paukner, Annika; Anderson, James R
2005-01-01
This study reports the first experimental exploration of possible contagious yawning in monkeys. Twenty-two stumptail macaques (Macaca arctoides) were presented with video clips of either yawns or control mouth movements by conspecifics. At a group level, monkeys yawned significantly more often during and just after the yawn tape than the control tape. Supplementary analysis revealed that the yawn tape also elicited significantly more self-directed scratching responses than the control tape, which suggests that yawning might have been caused by tension arising from viewing the yawn tape. Understanding to what extent the observed effect resembles contagious yawning as found in humans and chimpanzees requires more detailed experimentation. PMID:17148320
Video- Demonstrations of Stable and Unstable Solid Body Rotation on the International Space Station
NASA Technical Reports Server (NTRS)
2003-01-01
Saturday Morning Science, the science of opportunity series of applied experiments and demonstrations performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. In this video clip, Pettit demonstrates stable and unstable modes of solid body rotation on the ISS. Using a hardcover textbook, he demonstrates that it will rotate stably about its longest and shortest axes, which correspond to the maximum and minimum moments of inertia. Trying to rotate the book around the intermediate axis results in an unstable rotation in which the book appears to flip-flop while it rotates.
Children aged 6-24 months like to watch YouTube videos but could not learn anything from them.
Yadav, Savita; Chakraborty, Pinaki; Mittal, Prabhat; Arora, Udit
2018-03-20
Parents sometimes show young children YouTube videos on their smartphones. We studied the interaction of 55 Indian children born between December 2014 and May 2015 who watched YouTube videos when they were 6-24 months old. The children were recruited by the researchers using professional and personal contacts and were visited by the same two observers at four ages, for at least 10 minutes each time. The observers recorded the children's abilities to interact with touch screens and identify people in videos and noted which videos attracted them the most. The children were attracted to music at six months of age and were interested in watching the videos at 12 months. They could identify their parents in videos at 12 months and themselves by 24 months. They started touching the screen at 18 months and could press the buttons that appeared on the screen, but did not understand their use. The children preferred watching dance performances by multiple artists with melodic music, advertisements for products they used, and videos showing toys and balloons. Children up to two years of age could be entertained and kept busy by showing them YouTube clips on smartphones, but they did not learn anything from the videos. ©2018 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos
1997-01-01
Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
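The paper's actual feature set (DCT, macroblock, and motion vector information) and index structures are richer than this, but the core idea of reducing each key frame to a coarse, low-dimensional vector of block averages (the role the DC coefficients play in the compressed domain) and retrieving by nearest neighbor can be sketched as follows. All names and data here are hypothetical:

```python
def dc_feature(frame, grid=4):
    """Coarse DC-style feature: average luminance over a grid x grid
    partition of the frame (frame is a 2-D list of pixel values)."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    feat = []
    for gy in range(grid):
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            feat.append(sum(block) / len(block))
    return feat

def nearest(query, index):
    """Id of the indexed key frame closest in Euclidean feature distance."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(index, key=lambda k: dist(index[k], query))

# Hypothetical key frames: a uniformly dark shot and a bright one.
dark = [[20] * 16 for _ in range(16)]
bright = [[230] * 16 for _ in range(16)]
index = {"shot_dark": dc_feature(dark), "shot_bright": dc_feature(bright)}

query = [[215] * 16 for _ in range(16)]   # resembles the bright shot
assert nearest(dc_feature(query), index) == "shot_bright"
```

In the compressed domain, the block averages come essentially for free from the DC terms of the DCT, which is what makes this kind of indexing fast enough to skip full decompression.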
CLIPS: A tool for the development and delivery of expert systems
NASA Technical Reports Server (NTRS)
Riley, Gary
1991-01-01
The C Language Integrated Production System (CLIPS) is a forward chaining rule-based language developed by the Software Technology Branch at the Johnson Space Center. CLIPS provides a complete environment for the construction of rule-based expert systems. CLIPS was designed specifically to provide high portability, low cost, and easy integration with external systems. Other key features of CLIPS include a powerful rule syntax, an interactive development environment, high performance, extensibility, a verification/validation tool, extensive documentation, and source code availability. The current release of CLIPS, version 4.3, is being used by over 2,500 users throughout the public and private community, including all NASA sites, branches of the military, numerous Federal bureaus, government contractors, 140 universities, and many companies.
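CLIPS itself offers a rich rule syntax and an efficient Rete-based pattern matcher; as a rough illustration of the forward-chaining idea only, here is a naive match/fire loop in Python, with hypothetical facts and rules:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly fire any rule whose
    conditions are all present in working memory, asserting its
    conclusion, until no new facts can be derived. (CLIPS does this
    with the Rete algorithm; this loop is a toy equivalent.)"""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules as (conditions, conclusion) pairs.
rules = [
    (("sensor-fault",), "run-diagnostics"),
    (("run-diagnostics", "power-ok"), "notify-operator"),
]
derived = forward_chain({"sensor-fault", "power-ok"}, rules)
assert "notify-operator" in derived  # fired via chained inference
```

The second rule fires only after the first has asserted "run-diagnostics", which is the forward-chaining behavior the abstract describes.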
ERIC Educational Resources Information Center
Maggio, Severine; Lete, Bernard; Chenu, Florence; Jisa, Harriet; Fayol, Michel
2012-01-01
This study examines the dynamics of cognitive processes during writing. Participants were 5th, 7th and 9th graders ranging in age from 10 to 15 years. They were shown a short silent video composed of clips illustrating conflictual situations between people in school, and were invited to produce a narrative text. Three chronometric measures of word…
ERIC Educational Resources Information Center
O'Brien, Amanda; Schlosser, Ralf W.; Shane, Howard C.; Abramson, Jennifer; Allen, Anna A.; Flynn, Suzanne; Yu, Christina; Dimery, Katherine
2016-01-01
Using augmented input might be an effective means for supplementing spoken language for children with autism who have difficulties following spoken directives. This study aimed to (a) explore whether JIT-delivered scene cues (photos, video clips) via the Apple Watch® enable children with autism to carry out directives they were unable to implement…
Massive Multiplayer Online Gaming: A Research Framework for Military Training and Education
2005-03-01
those required by a military transforming itself to operating under the concept of network centric warfare. The technologies and practice...learning. Simulations are popular in other business situations and management processes. Data files, video clips, and flowcharts might help learners...on nature of these environments is another key motivator. According to Randy Hinrich, Microsoft Research Group Research Manager for Learning
Navigations: The Road to a Better Orientation.
Rizzo, Leah Heather
2016-01-01
A team of nursing professional development specialists from a large Magnet® healthcare network transformed new employee orientation using a themed, interdisciplinary, learner-centered approach. Guided by project management principles, the nursing professional development team created an engaging program that serves as an interactive guide for new hires' orientation journey. This unique approach differs from traditional orientation programs through its incorporation of gaming, video clips, and group discussions.
ERIC Educational Resources Information Center
Pfeiffer, Vanessa D. I.; Scheiter, Katharina; Gemballa, Sven
2012-01-01
This study investigated the effectiveness of three different instructional materials for learning how to identify fish at the species level in a blended classroom and out-of-classroom scenario. A sample of 195 first-year students of biology or geoecology at the University of Tuebingen participated in a course on identification of European…
Test Operations Procedure (TOP) 5-2-521 Pyrotechnic Shock Test Procedures
2007-11-20
Clipping will produce a signal that resembles a square wave. (2) Filters are used to limit the frequency bandwidth of the signal. Low pass filters...video systems permit observation of explosive items under test. c. Facilities to perform non-destructive inspections such as x-ray, ultrasonic, magna...test. (1) Accelerometers (2) Signal Conditioners (3) Digital Recording System (4) Data Processing System with hardcopy output
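The excerpt above notes that clipping produces a signal resembling a square wave. A minimal sketch of hard clipping (the sine frequency, sample count, and clip level are invented for illustration):

```python
import math

def hard_clip(samples, limit):
    """Clamp every sample to [-limit, +limit] (hard clipping)."""
    return [max(-limit, min(limit, s)) for s in samples]

# A unit-amplitude sine clipped well below its peak flattens toward a
# square wave: most samples end up pinned at the clip level.
sine = [math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
clipped = hard_clip(sine, 0.2)
```

Because the clip level (0.2) is far below the sine's peak (1.0), the great majority of samples sit exactly at the rails, which is the square-wave-like shape the test procedure describes.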
ERIC Educational Resources Information Center
Rusanganwa, Joseph Appolinary
2015-01-01
The aim of the present study is to investigate the process of constructing a Multimedia Assisted Vocabulary Learning (MAVL) instrument at a university in Rwanda in 2009. The instrument is used in a one-computer classroom where students were taught in a foreign language and had little access to books. It consists of video clips featuring images,…
ERIC Educational Resources Information Center
Borrego, Joaquin, Jr.; Burrell, T. Lindsey
2010-01-01
This article describes the application of a behavioral parent training program, Parent-Child Interaction Therapy (PCIT), in the treatment of behavior disorders in young children. PCIT is unique in that it works with both the child and parent in treatment and it focuses on improving the parent-child relationship as a means to improving parent and…
ERIC Educational Resources Information Center
Corten-Gualtieri, Pascale; Ritter, Christian; Plumat, Jim; Keunings, Roland; Lebrun, Marcel; Raucent, Benoit
2016-01-01
Most students enter their first university physics course with a system of beliefs and intuitions which are often inconsistent with the Newtonian frame of reference. This article presents an experiment of collaborative learning aiming at helping first-year students in an engineering programme to transition from their naïve intuition about dynamics…
ERIC Educational Resources Information Center
Jones, Robin M.; Walden, Tedra A.; Conture, Edward G.; Erdemir, Aysu; Lambert, Warren E.; Porges, Stephen W.
2017-01-01
Purpose: This study sought to determine whether respiratory sinus arrhythmia (RSA) and executive functions are associated with stuttered speech disfluencies of young children who do (CWS) and do not stutter (CWNS). Method: Thirty-six young CWS and 36 CWNS were exposed to neutral, negative, and positive emotion-inducing video clips, followed by…
ERIC Educational Resources Information Center
van Vliet, E. A.; Winnips, J. C.; Brouwer, N.
2015-01-01
In flipped-class pedagogy, students prepare themselves at home before lectures, often by watching short video clips of the course contents. The aim of this study was to investigate the effects of flipped classes on motivation and learning strategies in higher education using a controlled, pre- and posttest approach. The same students were followed…
2011-04-12
Figure 4: MaxxPro MRAP Vehicle...armored military vehicles, to demonstrate, often through the dissemination of video clips of attacks, the ability of overmatched irregular fighters to...vehicles, such as main battle tanks, were designed for. Additionally, the U.S. military is dependent on wheeled vehicles and roads for almost all combat
An fMRI investigation of expectation violation in magic tricks.
Danek, Amory H; Öllinger, Michael; Fraps, Thomas; Grothe, Benedikt; Flanagin, Virginia L
2015-01-01
Magic tricks violate the expected causal relationships that form an implicit belief system about what is possible in the world around us. Observing a magic effect seemingly invalidates our implicit assumptions about what action causes which outcome. We aimed at identifying the neural correlates of such expectation violations by contrasting 24 video clips of magic tricks with 24 control clips in which the expected action-outcome relationship is upheld. Using fMRI, we measured the brain activity of 25 normal volunteers while they watched the clips in the scanner. Additionally, we measured the professional magician who had performed the magic tricks under the assumption that, in contrast to naïve observers, the magician himself would not perceive his own magic tricks as an expectation violation. As the main effect of magic - control clips in the normal sample, we found higher activity for magic in the head of the caudate nucleus (CN) bilaterally, the left inferior frontal gyrus and the left anterior insula. As expected, the magician's brain activity substantially differed from these results, with mainly parietal areas (supramarginal gyrus bilaterally) activated, supporting our hypothesis that he did not experience any expectation violation. These findings are in accordance with previous research that has implicated the head of the CN in processing changes in the contingency between action and outcome, even in the absence of reward or feedback.
Tugwell-Allsup, J; Pritchard, A W
2018-05-01
This paper reports qualitative findings from within a larger randomised controlled trial in which a video clip or a telephone conversation with a radiographer was compared with a routine appointment letter and information sheet for alleviating patients' anxiety prior to their MRI scan. Questionnaires consisting of three free-text response questions were administered to all 74 patients recruited to the MRI anxiety clinical trial. The questionnaire was designed to establish patients' experiences of the intervention they had received. These questionnaires were administered post-scan. Two participants from each trial arm were also interviewed. A thematic approach was used to identify recurrent categories emerging from the qualitative data, which are supported by direct quotations. Participants in the interventional groups commented positively on the pre-MRI scan information they received, which contrasted with the relatively indifferent responses observed among those who received the standard information letter. Several important themes were identified, including patients' need for clear and simplified information, the experience of anticipation when waiting for the scan, and the informally acquired information about having an MRI scan, i.e. the shared experiences of friends and family. All themes highlighted the need for an inclusive and individually tailored approach to pre-scan information provision. The qualitative data collected throughout the trial support the statistical findings, which indicate that a short video clip, or a radiographer having a short conversation with patients before their scan, reduces pre-scan anxiety. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Automatic behavior sensing for a bomb-detecting dog
NASA Astrophysics Data System (ADS)
Nguyen, Hoa G.; Nans, Adam; Talke, Kurt; Candela, Paul; Everett, H. R.
2015-05-01
Bomb-detecting dogs are trained to detect explosives through their sense of smell and often perform a specific behavior to indicate a possible bomb detection. This behavior is noticed by the dog handler, who confirms the probable explosives, determines the location, and forwards the information to an explosive ordnance disposal (EOD) team. To improve the speed and accuracy of this process and better integrate it with the EOD team's robotic explosive disposal operation, SPAWAR Systems Center Pacific has designed and prototyped an electronic dog collar that automatically tracks the dog's location and attitude, detects the indicative behavior, and records the data. To account for the differences between dogs, a 5-minute training routine can be executed before the mission to establish initial values for the k-means clustering algorithm that classifies a specific dog's behavior. The recorded data include GPS location of the suspected bomb, the path the dog took to approach this location, and a video clip covering the detection event. The dog handler reviews and confirms the data before it is packaged up and forwarded on to the EOD team. The EOD team uses the video clip to better identify the type of bomb and for awareness of the surrounding environment before they arrive at the scene. Before the robotic neutralization operation commences at the site, the location and path data (which are supplied in a format understandable by the next-generation EOD robots, the Advanced EOD Robotic System) can be loaded into the robotic controller to automatically guide the robot to the bomb site. This paper describes the project with emphasis on the dog-collar hardware, behavior-classification software, and feasibility testing.
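The collar's behavior classifier is described as a k-means clustering algorithm seeded by a short training routine. A minimal sketch of that idea on one invented scalar feature (motion energy); the feature values, group sizes, and cluster count are illustrative assumptions, not details from the paper:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar features (e.g. per-window motion energy)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute centers.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Calibration data: ordinary movement vs. the low-motion indication behavior.
walking = [5.0 + 0.1 * i for i in range(20)]      # high motion energy
indicating = [0.5 + 0.05 * i for i in range(20)]  # low motion energy
centers = kmeans_1d(walking + indicating)
```

The two recovered centers then act as per-dog reference points: a new feature window is labeled by whichever center it falls nearest, mirroring how the 5-minute routine establishes initial values for a specific dog.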
Fernandez, Nicolas; Maizels, Max; Farhat, Walid; Smith, Edwin; Liu, Dennis; Chua, Michael; Bhanji, Yasin
2018-04-01
Established methods to train pediatric urology surgery by residency training programs require updating in response to administrative changes such as new, reduced trainee duty hours. Therefore, new objective methods must be developed to teach trainees. We approached this need by creating e-learning to teach attendings objective assessment of trainee skills using the Zwisch scale, an established assessment tool. The aim of this study was to identify whether or not e-learning is an appropriate platform for effective teaching of this assessment tool, by assessing inter-rater correlation of assessments made by the attendings after participation in the e-learning. Pediatric orchiopexy was used as the index case. An e-learning tool was created to teach attending surgeons objective assessment of trainees' surgical skills. First, e-learning content was created that demonstrated the assessment method using videotape of resident surgery performed in the operating room. Next, attendings were enrolled to e-learn this method. Finally, the ability of enrollees to assess resident surgical skill performance was tested. Namely, a test video was made showing a trainee performing inguinal orchiopexy. All enrollees viewed the same online videos. Assessments of surgical skills (Zwisch scale) were entered into an online survey. Data were analyzed by intraclass correlation coefficient and kappa analysis (strong correlation was defined as ICC ≥ 0.7). A total of 11 attendings were enrolled. All accessed the online learning and then made assessments of the surgical skills trainees showed on videotape. The e-learning comprised three modules: 1. "Core concepts," in which users learned the assessment tool methods; 2. "Learn to assess," in which users learned how to assess by watching video clips explaining the assessment method; and 3. "Test," in which users tested their skill at making assessments by watching video clips and then actively inputting their ratings of surgical and global skills as viewed in the video clips (Figure). 
A total of 89 surgical skill ratings were performed, with 56 (65%) exact matches between raters and 89 (100%) matched within one rank. The intraclass correlation coefficient (ANOVA) showed statistically significant correlation (r = 0.725, 95% CI 0.571-0.837, F = 3.976, p ≤ 0.00001). Kappa analysis of inter-rater reliability showed strong consensus between attendings for average measures, with ICC = 0.71, 95% CI 0.46-0.95 (p = 0.03). We launched e-learning to teach pediatric urology attendings how to assess trainee surgical skills objectively (Zwisch scale). After e-learning, there was strong inter-rater correlation in the assessments made. We plan to extend such e-learning to pediatric urology surgical training programs. Copyright © 2017 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
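The reliability statistics above rest on the intraclass correlation coefficient. As a sketch of how such a coefficient can be computed from a subjects-by-raters table, here is one common variant (two-way, consistency, single measures, often labeled ICC(3,1)); the toy ratings are invented, and this is not necessarily the exact ICC model used in the study:

```python
def icc_consistency(ratings):
    """ICC(3,1): two-way model, consistency, single measures.
    `ratings` is a list of rows, one row per subject, one score per rater."""
    n = len(ratings)      # subjects
    k = len(ratings[0])   # raters
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    rater_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_subj - ss_rater           # residual sum of squares
    msr = ss_subj / (n - 1)                          # between-subjects mean square
    mse = ss_err / ((n - 1) * (k - 1))               # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse)

# Three subjects rated identically by three raters: perfect consistency.
agreement = icc_consistency([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```

Because this variant measures consistency rather than absolute agreement, a rater who scores every subject one point higher than a colleague still yields an ICC of 1.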
Carbon, Climate and Cameras: Showcasing Arctic research through multimedia storytelling
NASA Astrophysics Data System (ADS)
Tachihara, B. L.; Linder, C. A.; Holmes, R. M.
2011-12-01
In July 2011, Tachihara spent three weeks in the Siberian Arctic documenting The Polaris Project, an NSF-funded effort that brings together an international group of undergraduate students and research scientists to study Arctic systems. Using a combination of photography, video and interviews gathered during the field course, we produced a six-minute film focusing on the researchers' quest to track carbon as it moves from terrestrial upland areas into lakes, streams, rivers and eventually into the Arctic Ocean. The overall goal was to communicate the significance of Arctic science in the face of changing climate. Using a selection of clips from the 2011 video, we will discuss the advantages and challenges specific to using multimedia presentations to represent Arctic research, as well as science in general. The full video can be viewed on the Polaris website: http://www.thepolarisproject.org.
Guo, Xue; Zhou, Xishu; Hale, Lauren; Yuan, Mengting; Feng, Jiajie; Ning, Daliang; Shi, Zhou; Qin, Yujia; Liu, Feifei; Wu, Liyou; He, Zhili; Van Nostrand, Joy D.; Liu, Xueduan; Luo, Yiqi; Tiedje, James M.; Zhou, Jizhong
2018-01-01
Clipping, removal of aboveground plant biomass, is an important issue in grassland ecology. However, few studies have focused on the effect of clipping on belowground microbial communities. Using integrated metagenomic technologies, we examined the taxonomic and functional responses of soil microbial communities to annual clipping (2010–2014) in a grassland ecosystem of the Great Plains of North America. Our results indicated that clipping significantly (P < 0.05) increased root and microbial respiration rates. Annual temporal variation within the microbial communities was much greater than the significant changes introduced by clipping, but cumulative effects of clipping were still observed in the long-term scale. The abundances of some bacterial and fungal lineages including Actinobacteria and Bacteroidetes were significantly (P < 0.05) changed by clipping. Clipping significantly (P < 0.05) increased the abundances of labile carbon (C) degrading genes. More importantly, the abundances of recalcitrant C degrading genes were consistently and significantly (P < 0.05) increased by clipping in the last 2 years, which could accelerate recalcitrant C degradation and weaken long-term soil carbon stability. Furthermore, genes involved in nutrient-cycling processes including nitrogen cycling and phosphorus utilization were also significantly increased by clipping. The shifts of microbial communities were significantly correlated with soil respiration and plant productivity. Intriguingly, clipping effects on microbial function may be highly regulated by precipitation at the interannual scale. Altogether, our results illustrated the potential of soil microbial communities for increased soil organic matter decomposition under clipping land-use practices. PMID:29904372
Understanding the Perception of Global Climate Change: Research into the Role of Media
NASA Astrophysics Data System (ADS)
Kundargi, R.; Gopal, S.; Tsay-Vogel, M.
2016-12-01
Here we present preliminary results from a novel study investigating the perception of climate change media along two pre-selected dimensions. We administered a questionnaire varying in two dimensions (spatial proximity and scientific literacy) to 155 participants, mostly students, in order to evaluate their emotional and cognitive reactions to a series of video clips depicting the impacts of global climate change (GCC) events or the science behind global climate change. Nineteen videos were selected and vetted by experts for content and relevance to the subject matter. Our preliminary analysis indicates that the further away an event is perceived to be (spatial proximity), the lower the reported uncertainty about the risks of GCC, self-efficacy to affect GCC, and personal responsibility to influence GCC. Furthermore, our results show that videos with a higher perceived background scientific knowledge requirement (scientific literacy) produce greater viewer engagement with the video. A full analysis and results of this study will be presented within the poster presentation.
Six Decades of Flight Research: Dryden Flight Research Center, 1946 - 2006 [DVD
NASA Technical Reports Server (NTRS)
Fisher, David F.; Parcel, Steve
2007-01-01
This DVD contains an introduction by Center Director Kevin Peterson, two videos on the history of NASA Dryden Flight Research Center, and a bibliography of NASA Dryden Flight Research Center publications from 1946 through 2006. The NASA Dryden 60th Anniversary Summary Documentary video is narrated by Michael Dorn and gives a brief history of Dryden. Six Decades of Flight Research at NASA Dryden lasts approximately 75 minutes and is divided into six eras: 1. The Early X-Plane Era; 2. The X-15 Era; 3. The Lifting Body Era; 4. The Space Shuttle Era; 5. The High Alpha and Thrust Vectoring Era; and 6. The Technology Demonstration Era. The bibliography provides citations for NASA Technical Reports and Conference Papers, Tech Briefs, Contractor Reports, UCLA Flight Systems Research Center publications, and Dryden videos. Finally, a link is provided to the NASA Dryden Gallery, which features video clips and photos of the many unique aircraft flown at NASA Dryden and its predecessor organizations.
Fast Appearance Modeling for Automatic Primary Video Object Segmentation.
Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong
2016-02-01
Automatic segmentation of the primary object in a video clip is a challenging problem because there is no prior knowledge of the primary object. Most existing techniques therefore adopt an iterative approach to foreground and background appearance modeling, i.e., fixing the appearance model while optimizing the segmentation and fixing the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily become trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.
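The abstract poses segmentation as energy minimization in an MRF with unary (appearance) and pairwise (smoothness) terms, solved by graph cut. On a 1-D chain the same energy can be minimized exactly by dynamic programming, which makes for a compact illustration; the pixel values and costs below are invented, and this sketch is a simplified analog, not the paper's method:

```python
def mrf_chain_segment(unary, smooth):
    """Exact MAP labeling of a binary MRF on a 1-D chain of pixels.
    unary[i][l] is the cost of giving pixel i label l (0 = background,
    1 = foreground); `smooth` penalizes neighboring pixels that differ."""
    n = len(unary)
    cost = [list(unary[0])]   # cost[i][l]: best energy of the prefix ending in l
    back = []                 # back[i-1][l]: best previous label given l at i
    for i in range(1, n):
        row, ptr = [], []
        for l in (0, 1):
            prev = [cost[-1][p] + (smooth if p != l else 0.0) for p in (0, 1)]
            best = 0 if prev[0] <= prev[1] else 1
            row.append(unary[i][l] + prev[best])
            ptr.append(best)
        cost.append(row)
        back.append(ptr)
    labels = [0 if cost[-1][0] <= cost[-1][1] else 1]
    for ptr in reversed(back):
        labels.append(ptr[labels[-1]])
    return labels[::-1]

# Unary costs from invented pixel intensities: cost of "background" is the
# intensity itself, cost of "foreground" is its complement.
pixels = [0.1, 0.2, 0.9, 0.4, 0.8, 0.9]
unary = [[p, 1 - p] for p in pixels]
labels = mrf_chain_segment(unary, smooth=0.5)
```

With no smoothing the labeling follows each pixel independently; with `smooth=0.5` the isolated low-intensity pixel at index 3 is absorbed into the foreground because the smoothness saving outweighs its unary cost, the same trade-off a graph cut resolves on a 2-D grid.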
Audio-based queries for video retrieval over Java enabled mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef
2006-02-01
In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. This framework is implemented in a client-server architecture, where the client application is developed in Java to be platform independent, whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device on which it runs, such as screen size and commands. The entire framework is designed to take advantage of high-level segmentation and classification of audio content to improve the speed and accuracy of audio-based media retrieval. Therefore, the primary objective of this framework is to provide an adaptive basis for performing efficient video retrieval operations based on audio content and type (i.e. speech, music, fuzzy, and silence). Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.
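The framework relies on segmenting and classifying audio into types such as speech, music, and silence. A minimal sketch of the simplest such step, flagging silent frames by short-time energy (frame length, sample rate, and threshold are invented assumptions):

```python
import math

def frame_energies(samples, frame_len):
    """Mean squared amplitude of each non-overlapping frame."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def label_frames(samples, frame_len=160, silence_thresh=1e-4):
    """Tag each frame 'silence' or 'sound' by its short-time energy."""
    return ['silence' if e < silence_thresh else 'sound'
            for e in frame_energies(samples, frame_len)]

# Half a second of silence followed by half a second of a 440 Hz tone,
# at an assumed 8 kHz mono sample rate.
signal = [0.0] * 4000 + [0.5 * math.sin(2 * math.pi * 440 * n / 8000)
                         for n in range(4000)]
labels = label_frames(signal)
```

Distinguishing the remaining "sound" frames into speech, music, or fuzzy content would require richer features (e.g. zero-crossing rate or spectral statistics) than this energy gate.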
Doorbar-Baptist, Stuart; Adams, Roger; Rebbeck, Trudy
2017-04-01
This study documents a protocol designed to evaluate pelvic floor motor control in men with prostate cancer. It also aims to evaluate the reliability of therapists in rating motor control of pelvic floor muscles (PFMs) using real-time ultrasound imaging (RUSI) video clips. We further determined predictors of acquiring motor control. Ninety-one men diagnosed with prostate cancer attending a physiotherapy clinic for pelvic floor exercises were taught detailed pelvic floor motor control exercises by a physiotherapist using trans-abdominal RUSI for biofeedback. A new protocol to rate motor control skill acquisition was developed. Three independent physiotherapists assessed motor control skill attainment by viewing RUSI videos of the contractions. Inter-rater reliability was evaluated using intra-class correlation coefficients. Logistic regression analysis was conducted to identify predictors of successful skill attainment. Acquisition of the skill was compared between pre- and post-operative participants using an independent-group t-test. There was good reliability for rating the RUSI video clips (ICC = 0.73, 95% CI 0.59-0.82) for experienced therapists. Having low BMI and being seen pre-operatively predicted motor skill attainment, accounting for 46.3% of the variance. Significantly more patients trained pre-operatively acquired the skill of pelvic floor control compared with patients initially seen post-operatively (OR 11.87, 95% CI 1.4 to 99.5, p = 0.02). A new protocol to evaluate attainment of pelvic floor control in men with prostate cancer can be assessed reliably from RUSI images, and is most effectively delivered pre-operatively.
Classroom Materials from the Acoustical Society of America
NASA Astrophysics Data System (ADS)
Adams, W. K.; Clark, A.; Schneider, K.
2013-09-01
As part of the new education initiatives of the Acoustical Society of America (ASA), an activity kit for teachers that includes a variety of lessons addressing acoustics for a range of students (K-12) has been created. The "Sound and Music Activity Kit" is free to K-12 teachers. It includes materials sufficient to teach a class of 30 students plus a USB thumb drive containing 47 research-based, interactive, student-tested lessons, laboratory exercises, several assessments, and video clips of a class using the materials. ASA has also partnered with both the Optical Society of America (OSA) and the American Association of Physics Teachers. AAPT Physics Teaching Resource Agents (PTRA) have reviewed the lessons along with members of the ASA Teacher Activity Kit Committee. Topics include basic learning goals for teaching the physics of sound with examples and applications relating to medical imaging, animal bioacoustics, physical and psychological acoustics, speech, audiology, and architectural acoustics.
Quagga and zebra mussels: biology, impacts, and control
Nalepa, Thomas F.; Schloesser, Don W.; Nalepa, Thomas F.; Schloesser, Don W.
2013-01-01
Quagga and Zebra Mussels: Biology, Impacts, and Control, Second Edition provides a broad view of the zebra/quagga mussel issue, offering a historic perspective and up-to-date information on mussel research. Comprising 48 chapters, this second edition includes reviews of mussel morphology, physiology, and behavior. It details mussel distribution and spread in Europe and across North America, and examines policy and regulatory responses, management strategies, and mitigation efforts. In addition, this book provides extensive coverage of the impact of invasive mussel species on freshwater ecosystems, including effects on water clarity, phytoplankton, water quality, food web changes, and consequences to other aquatic fauna. It also reviews and offers new insights on how zebra and quagga mussels respond and adapt to varying environmental conditions. This new edition includes seven video clips that complement chapter text and, through visual documentation, provide a greater understanding of mussel behavior and distribution.
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
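CLIPS is described above as a forward-chaining rule language with Rete-based pattern matching. The following toy Python sketch shows naive forward chaining to a fixed point (the rules are invented, and a real Rete network avoids the wholesale re-matching this loop performs on every cycle):

```python
def forward_chain(facts, rules):
    """Naive forward chaining: fire every rule whose conditions hold,
    asserting its conclusion, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules; CLIPS would express these as (defrule ...) constructs.
rules = [
    (("duck-shaped", "quacks"), "is-duck"),
    (("is-duck",), "has-feathers"),
]
derived = forward_chain({"duck-shaped", "quacks"}, rules)
```

The second rule fires only because the first one asserted a new fact, which is the chaining behavior the CLIPS inference engine provides; the Rete algorithm makes the condition matching efficient by caching partial matches as facts are asserted.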
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Culbert, C.
1994-01-01
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line-oriented version. The mouse/window interface version for the PC works with a Microsoft-compatible mouse or without a mouse; this window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window-oriented version for the Macintosh includes a variant with a full Macintosh-style interface, including an integrated editor; this version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, the CLIPS Intelligent Tutoring System, for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving, which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk; CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on IBM PC computers operating under DOS, on the Macintosh, and on DEC VAX series computers operating under VMS or ULTRIX. The line-oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986; Version 4.2 was released in July of 1988 and Version 4.3 in June of 1989.
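The forward-chaining cycle described above (facts are asserted, and rules whose conditions match the fact base fire and assert new facts, until quiescence) can be illustrated with a minimal Python sketch. This is a toy for illustration only, not CLIPS itself, and the rule and fact names are invented:

```python
# Toy forward-chaining engine illustrating the assert/match/fire cycle.
# Illustrative sketch only, not CLIPS; rules and facts are invented.

def run(facts, rules):
    """Fire rules whose conditions match the fact base until no rule fires."""
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, condition, action in rules:
            if name not in fired and condition(facts):
                facts |= action(facts)   # assert the rule's new facts
                fired.add(name)
                changed = True
    return facts

# Each rule pairs a condition on the fact base with facts to assert.
rules = [
    ("duck-rule", lambda f: "quacks" in f and "waddles" in f,
     lambda f: {"is-duck"}),
    ("pond-rule", lambda f: "is-duck" in f,
     lambda f: {"lives-near-water"}),
]

facts = run({"quacks", "waddles"}, rules)
print(sorted(facts))  # ['is-duck', 'lives-near-water', 'quacks', 'waddles']
```

In CLIPS itself such a rule would be written with `defrule`, and the Rete network would match facts incrementally rather than rescanning every rule on each cycle as this toy does.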
ERIC Educational Resources Information Center
Hodkinson, Alan
2012-01-01
This paper examines the picture of disability portrayed within the electronic media presented to primary-aged pupils in England. The study employed proto-text analysis to examine 494 separate electronic resources which contained 4485 illustrations, 930 photographs and 59 video clips. The major finding of the research is that the media examined…
The functional neuroanatomy of maternal love: mother's response to infant's attachment behaviors.
Noriuchi, Madoka; Kikuchi, Yoshiaki; Senoo, Atsushi
2008-02-15
Maternal love, which may be the core of maternal behavior, is essential for the mother-infant attachment relationship and is important for the infant's development and mental health. However, little is known about its neural mechanisms in human mothers. We examined patterns of maternal brain activation in response to infant cues using video clips. We performed functional magnetic resonance imaging (fMRI) measurements while 13 mothers viewed silent video clips of their own infant and of other infants, approximately 16 months of age, demonstrating two different attachment behaviors (smiling at the infant's mother and crying for her). We found that a limited number of the mother's brain areas were specifically involved in recognition of her own infant, namely the orbitofrontal cortex (OFC), periaqueductal gray, anterior insula, and dorsal and ventrolateral parts of the putamen. Additionally, we found a strong and specific response in the mother's brain to her own infant's distress; this differential neural activation pattern was found in the dorsal region of the OFC, caudate nucleus, right inferior frontal gyrus, dorsomedial prefrontal cortex (PFC), anterior cingulate, posterior cingulate, thalamus, substantia nigra, posterior superior temporal sulcus, and PFC. Our results reveal a highly elaborate neural mechanism mediating maternal love and the diverse, complex maternal behaviors underlying vigilant protectiveness.
The Effect of Background Music in Shark Documentaries on Viewers' Perceptions of Sharks.
Nosal, Andrew P; Keenan, Elizabeth A; Hastings, Philip A; Gneezy, Ayelet
2016-01-01
Despite the ongoing need for shark conservation and management, prevailing negative sentiments marginalize these animals and legitimize permissive exploitation. These negative attitudes arise from an instinctive, yet exaggerated fear, which is validated and reinforced by disproportionate and sensationalistic news coverage of shark 'attacks' and by highlighting shark-on-human violence in popular movies and documentaries. In this study, we investigate another subtler, yet powerful factor that contributes to this fear: the ominous background music that often accompanies shark footage in documentaries. Using three experiments, we show that participants rated sharks more negatively and less positively after viewing a 60-second video clip of swimming sharks set to ominous background music, compared to participants who watched the same video clip set to uplifting background music, or silence. This finding was not an artifact of soundtrack alone because attitudes toward sharks did not differ among participants assigned to audio-only control treatments. This is the first study to demonstrate empirically that the connotative attributes of background music accompanying shark footage affect viewers' attitudes toward sharks. Given that nature documentaries are often regarded as objective and authoritative sources of information, it is critical that documentary filmmakers and viewers are aware of how the soundtrack can affect the interpretation of the educational content.
Ota, Nakao; Tanikawa, Rokuya; Noda, Kosumo; Tsuboi, Toshiyuki; Kamiyama, Hiroyasu; Tokuda, Sadahisa
2015-01-01
Background: The fenestrated clip is useful when the approach angle is limited and the working space is narrow. However, before the development of the new Yasargil titanium fenestrated mini-clip, fenestrated clips were available only in larger sizes, and those larger clips suffer from a triangle-shaped gap at the proximal end of the blade. The authors describe the efficacy, limitations, and surgical technique of the Yasargil titanium fenestrated mini-clip. Methods: Fifty-nine aneurysms were treated using these mini-clips. Aneurysm location, size, dome-to-neck ratio, mean follow-up period, neck remnant, and recurrence rate were analyzed. Among these cases, we present eight characteristic cases, including one with aneurysm recurrence, and we review the problems associated with the triangle-shaped gap at the proximal end of the clip. Results: The average aneurysm size was 5.57 mm, and the dome-to-neck ratio was >2.0 in 1.69%, >1.5 in 11.8%, >1.2 in 35.6%, and <1.2 in 50.8% of cases. The mean follow-up period for the 59 cases was 5.5 months (range, 0.5–16 months). Angiographic recurrence of the treated portion occurred in 1 case (1.7%), a basilar artery tip aneurysm. Conclusion: The availability of the Yasargil titanium fenestrated mini-clip increases the options for clipping to minimize the remnant of the clipped aneurysm. However, there is still concern over the triangular space at the base of the blade, especially when treating an aneurysm with a thin vessel wall; modification of the clipping technique is therefore sometimes needed. PMID:26664871
An Inquiry-based Course Using ``Physics?'' in Cartoons and Movies
NASA Astrophysics Data System (ADS)
Rogers, Michael
2007-01-01
Books, cartoons, movies, and video games provide engaging opportunities to get both science and nonscience students excited about physics. An easy way to use these media in one's classroom is to have students view clips and identify unusual events or odd physics, or list things that violate our understanding of the physics that governs our universe.1,2 These activities provide a lesson or two of material, but how does one create an entire course on examining the physics in books, cartoons, movies, and video games? Other approaches attempt to reconcile events in various media with our understanding of physics3-8 or use cartoons themselves to help explain physics topics.9
High fidelity case-based simulation debriefing: everything you need to know.
Hart, Danielle; McNeil, Mary Ann; Griswold-Theodorson, Sharon; Bhatia, Kriti; Joing, Scott
2012-09-01
In this 30-minute talk, the authors take an in-depth look at how to debrief high-fidelity case-based simulation sessions, including discussion on debriefing theory, goals, approaches, and structure, as well as ways to create a supportive and safe learning environment, resulting in successful small group learning and self-reflection. Emphasis is placed on the "debriefing with good judgment" approach. Video clips of sample debriefing attempts, highlighting the "dos and don'ts" of simulation debriefing, are included. The goal of this talk is to provide you with the necessary tools and information to develop a successful and effective debriefing approach. There is a bibliography and a quick reference guide in Data Supplements S1 and S2 (available as supporting information in the online version of this paper). © 2012 by the Society for Academic Emergency Medicine.
Snow, Rosamund; Crocker, Joanna; Talbot, Katherine; Moore, Jane; Salisbury, Helen
2016-12-01
Medical education increasingly includes patient perspectives, but few studies look at the impact on students' proficiency in standard examinations. We explored students' exam performance after viewing video of patients' experiences. Eighty-eight medical students were randomized to one of two e-learning modules. The experimental group saw video clips of patients describing their colposcopy, while the control group viewed a clinician describing the procedure. Students then completed a Multiple Choice Questionnaire (MCQ) and were assessed by a blinded clinical examiner in an Objective Structured Clinical Examination (OSCE) with a blinded simulated patient (SP). The SP scored students using the Doctors' Interpersonal Skills Questionnaire (DISQ). Students rated the module's effect on their skills and confidence. Regression analyses were used to compare the effect of the two modules on these outcomes, adjusting for gender and graduate entry. The experimental group performed better in the OSCE than the control group (odds ratio 2.7 [95%CI 1.2-6.1]; p = 0.016). They also reported significantly more confidence in key areas, including comfort with patients' emotions (odds ratio 6.4 [95%CI 2.7-14.9]; p < 0.0005). There were no other significant differences. Teaching that included recorded elements of real patient experience significantly improved students' examination performance and confidence.
Kim, Do Hyung; Paik, Hyo Chae; Lee, Doo Yun
2004-08-01
The main cause of dissatisfaction after sympathetic trunk-blocking surgery (T2 sympathectomy, sympathetic clipping) for craniofacial hyperhidrosis is compensatory sweating. Preserving the sympathetic trunk may decrease the incidence of compensatory sweating; we therefore introduce T2 ramicotomy, which may better preserve the sympathetic nerve trunk and thereby reduce compensatory sweating. From January 2000 to November 2002, video-assisted thoracoscopic (VAT) T2 sympathetic clipping and VAT ramicotomy were performed in 44 patients suffering from craniofacial hyperhidrosis: 22 patients underwent T2 sympathetic clipping (group 1) and 22 underwent division of the T2 rami communicantes (group 2). We retrospectively analyzed the rate of satisfaction, dryness of the face, and grade of compensatory sweating. The two groups were similar with respect to facial dryness (P = 0.099). In group 1, 5 patients (22.7%) had excessive dryness and 17 (77.3%) had dryness; in group 2, 3 patients (13.6%) had excessive dryness, 15 (68.1%) had dryness, and 4 (18.3%) had persistent sweating. The rate of satisfaction was 77.3% in group 1 and 63.6% in group 2, with no significant difference (P > 0.05). The rate of compensatory sweating in group 2 (72.7%) was significantly lower than in group 1 (95.4%) (P < 0.039). The chance of embarrassing or disabling compensatory sweating was also lower in group 2 than in group 1: 76.5% in group 1 (embarrassing in 8 patients, disabling in 9) versus 36.4% in group 2 (embarrassing in 7 patients, disabling in 1), a statistically significant difference (P < 0.006). For craniofacial hyperhidrosis, T2 ramicotomy lowers the rates of compensatory sweating and of excessive facial dryness compared with T2 clipping.
Plantar pressure of clipless and toe-clipped pedals in cyclists - A pilot study
Davis, Andrea; Pemberton, Troy; Ghosh, Subhajit; Maffulli, Nicola; Padhiar, Nat
2011-01-01
Summary: To determine the effect of clipless and toe-clipped pedals on plantar foot pressure while cycling, seven bikers and 11 healthy volunteers were tested on a Giant ATX Team mountain bike; a Tekscan Clinical 5.24 F-scan® system with an inner-sole pressure sensor, a Tacx Cycleforce One Turbo Trainer, and a Cateye Mity 8 computerized speedometer were used. The subjects wore Shimano M037 shoes and used a standard clipless and a standard toe-clipped pedal. The seat height was set at 100% of the subject's trochanteric height. Plantar pressures were recorded over 12 consecutive crank cycles at a constant speed for each of the power outputs, and the recordings were analysed to record the pressure exerted at 12 positions on the foot for each variable. The outcomes of interest were whether any of the metatarsals was dominant and whether plantar pressures differed between clipped and clipless pedals. There was a significant difference in pressure at many positions of the foot, but the sites differed for each individual. General regression analysis indicated that pedal type had a statistically significant effect on plantar pressure at the 1st metatarsal (p=0.042), 3rd metatarsal (p<0.001), 5th metatarsal (p<0.001), 2nd toe (p=0.018), 5th toe (p<0.001), lateral midfoot (p<0.001), and central heel (p<0.001). Clipless pedals produce higher pressures that are more spread across the foot than toe-clipped pedals. This may have implications for their use in the prevention and/or management of overuse injuries of the knee and foot. PMID:23738240
NASA Astrophysics Data System (ADS)
Carlowicz, Michael
If you have a computer and a grasp of algebra, you can learn physics. That is one of the messages behind the release of Physics—The Root Science, a new full-text version of a physics textbook available at no cost on the World Wide Web. The interactive textbook is the work of the International Institute of Theoretical and Applied Physics (IITAP) at Iowa State University, which was established in 1993 as a partnership with the United Nations Educational, Scientific and Cultural Organization (UNESCO). With subject matter equivalent to that of a 400-page volume, the text is designed to be completed in one school year. The textbook will eventually also include video clips of experiments and interactive learning modules, as well as links to appropriate cross-references about fundamental principles of physics.
Shuttle Earth Views, 1994. Part 3
NASA Technical Reports Server (NTRS)
1995-01-01
In this third part of a four-part video compilation of Space Shuttle Earth views, various geographical areas are shown, including both land and water masses. The views cover South America, Asia (North Vietnam, Laos, Cambodia, China, Malaysia, Thailand, Java, various islands, Burma, Philippines, Taiwan, Guam), New Guinea, Australia, Morocco, Southern Europe (Spain, Portugal, Algeria, Italy, Sicily, Greece, Former Republic of Yugoslavia, Tunisia), and parts of the Middle East (Libya, Saudi Arabia, Egypt, Israel, Jordan, Sinai, Cyprus, Lebanon, Iraq), the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, and the Mediterranean, Dead, Coral, Tyrrhenian, Adriatic, Ionian, Red, South China, Mindanao, Arafura, Sulu, Java, and China Seas. Each film clip has a heading that names the shuttle and the geographical location of the footage.
The Apollo 17 Lunar Surface Journal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, E.M.
1995-08-01
The material included in the Apollo 17 Lunar Surface Journal has been assembled so that an uninitiated reader can understand, in some detail, what happened during Apollo 17, why it happened, and what was learned, particularly about living and working on the Moon. At its heart, the Journal consists of a corrected mission transcript interwoven with commentary by the crew and by the Journal Editor -- commentary which, we hope, will make the rich detail of Apollo 17 accessible to a wide audience. To make the Journal even more accessible, this CD-ROM publication contains virtually all of the Apollo 17 audio, a significant fraction of the photographs, and a selection of drawings, maps, video clips, and background documents.
Aqua Education and Public Outreach
NASA Astrophysics Data System (ADS)
Graham, S. M.; Parkinson, C. L.; Chambers, L. H.; Ray, S. E.
2011-12-01
NASA's Aqua satellite was launched on May 4, 2002, with six instruments designed to collect data about the Earth's atmosphere, biosphere, hydrosphere, and cryosphere. Since the late 1990s, the Aqua mission has involved considerable education and public outreach (EPO) activities, including printed products, formal education, an engineering competition, webcasts, and high-profile multimedia efforts. The printed products include Aqua and instrument brochures, an Aqua lithograph, Aqua trading cards, NASA Fact Sheets on Aqua, the water cycle, and weather forecasting, and an Aqua science writers' guide. On-going formal education efforts include the Students' Cloud Observations On-Line (S'COOL) Project, the MY NASA DATA Project, the Earth System Science Education Alliance, and, in partnership with university professors, undergraduate student research modules. Each of these projects incorporates Aqua data into its inquiry-based framework. Additionally, high school and undergraduate students have participated in summer internship programs. An earlier formal education activity was the Aqua Engineering Competition, which was a high school program sponsored by the NASA Goddard Space Flight Center, Morgan State University, and the Baltimore Museum of Industry. The competition began with the posting of a Round 1 Aqua-related engineering problem in December 2002 and concluded in April 2003 with a final round of competition among the five finalist teams. The Aqua EPO efforts have also included a wide range of multimedia products. Prior to launch, the Aqua team worked closely with the Special Projects Initiative (SPI) Office to produce a series of live webcasts on Aqua science and the Cool Science website aqua.nasa.gov/coolscience, which displays short video clips of Aqua scientists and engineers explaining the many aspects of the Aqua mission. 
These video clips, the Aqua website, and numerous presentations have benefited from dynamic visualizations showing the Aqua launch, instrument deployments, instrument sensing, and the Aqua orbit. More recently, in 2008 the Aqua team worked with the ViewSpace production team from the Space Telescope Science Institute to create an 18-minute ViewSpace feature showcasing the science and applications of the Aqua mission. Then in 2010 and 2011, Aqua and other NASA Earth-observing missions partnered with National CineMedia on the "Know Your Earth" (KYE) project. During January and July 2010 and 2011, KYE ran 2-minute segments highlighting questions that promoted global climate literacy on lobby LCD screens in movie theaters throughout the U.S. Among the ongoing Aqua EPO efforts is the incorporation of Aqua data sets onto the Dynamic Planet, a large digital video globe that projects a wide variety of spherical data sets. Aqua also has a highly successful collaboration with EarthSky communications on the production of an Aqua/EarthSky radio show and podcast series. To date, eleven productions have been completed and distributed via the EarthSky network. In addition, a series of eight video podcasts (i.e., vodcasts) are under production by NASA Goddard TV in conjunction with Aqua personnel, highlighting various aspects of the Aqua mission.
[Clip Sheets from BOCES. Opportunities. Health. Careers. = Oportunidades. Salud. Una Carrera En...
ERIC Educational Resources Information Center
State Univ. of New York, Geneseo. Coll. at Geneseo. Migrant Center.
This collection of 83 clip sheets, or classroom handouts, was created to help U.S. migrants learn more about health, careers, and general "opportunities" including education programs. They are written in both English and Spanish and are presented in an easily understandable format. Health clip-sheet topics include the following: Abuse; AIDS;…
Saxena, Payal; Ji-Shin, Eun; Haito-Chavez, Yamile; Valeshabad, Ali K.; Akshintala, Venkata; Aguila, Gerard; Kumbhari, Vivek; Ruben, Dawn S.; Lennon, Anne-Marie; Singh, Vikesh; Canto, Marcia; Kalloo, Anthony; Khashab, Mouen A.
2014-01-01
Background/Aim: There are currently no data on the relative retention rates of the Instinct clip, Resolution clip, and QuickClip2Long, and it is unknown whether retention rates differ when clips are applied to ulcerated rather than normal mucosa. The aim of this study was to compare the retention rates of three commonly used endoscopic clips. Materials and Methods: Six pigs underwent upper endoscopy with placement of one of each of the three types of clips on normal mucosa in the gastric body. Three mucosal resections were also performed to create “ulcers”, and each ulcer was closed with one of the three different clips. Repeat endoscopy was performed weekly for up to 4 weeks. Results: Only the Instinct and Resolution clips remained attached for the duration of the study (4 weeks). At each time point, Instinct clips were retained on normal mucosa in the greatest proportion, followed by Resolution clips; QuickClip2Long had the lowest retention rate on normal mucosa. Similar retention rates of Instinct and Resolution clips were seen on simulated ulcers, and although both appeared superior to QuickClip2Long, the difference did not reach statistical significance. All QuickClip2Long clips were dislodged by 4 weeks in both groups. Conclusions: The Resolution and Instinct clips have comparable retention rates, and both appeared better than the QuickClip2Long on normal mucosa and simulated ulcers, although this did not reach statistical significance. Both the Resolution clip and the Instinct clip may be preferred in clinical situations where long-term clip attachment is required, including marking of tumors for radiotherapy and anchoring of feeding tubes or stents. Either of the currently available clips may be suitable for closure of iatrogenic mucosal defects without features of chronicity. PMID:25434317
Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.
Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie
2014-01-01
Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal-face video of the participants were recorded, and the participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports: disgust was induced significantly more strongly than the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.
Wright, M J; Bishop, D T; Jackson, R C; Abernethy, B
2011-08-18
Badminton players of varying skill levels viewed normal and point-light video clips of opponents striking the shuttle towards the viewer; their task was to predict in which quadrant of the court the shuttle would land. In a whole-brain fMRI analysis we identified a bilateral cortical network sensitive to the anticipation task relative to control stimuli; this network is more extensive and more precisely localised than previously reported. Voxel clusters responding more strongly in experts than in novices were associated with all task-sensitive areas, whereas voxels responding more strongly in novices were found outside these areas. Task-sensitive areas for normal and point-light video were very similar, whereas early visual areas responded differentially, indicating the primacy of kinematic information for sport-related anticipation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Hettige, Samantha; Walsh, Daniel
2010-03-01
To illustrate the use of indocyanine green (ICG) video-angiography to confirm abolition of a spinal dural arteriovenous fistula (SDAVF) and to preserve the normal vascular anatomy intraoperatively. A 73-year-old woman presenting with progressive myelopathy was diagnosed with an SDAVF in which the origin of the fistula was in close proximity to the origin of the posterior spinal artery. ICG was injected intravenously and, using a filter on the microscope, dynamic filling of the abnormal vasculature was visualised. After applying a clip to the fistulous connection, we were able to see the successful interruption of the dural fistula on-table, in real time. ICG video-angiography confirmed interruption of the fistula and preservation of the associated posterior spinal artery. We find that this relatively new technology has the potential to shorten operating times, and that it gives additional reassurance of the completeness of surgical treatment and the preservation of normal spinal vasculature.
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., following a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video makes it difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, which make the video even busier. One way to determine the busyness of video content is to conduct eye-gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the steady-state saccade probability computed via saliency-map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence between consecutive motion-compensated saliency maps.
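The steady-state computation at the core of the VAD estimate can be sketched for a two-state (fixation/saccade) gaze chain. In the following Python sketch the transition probabilities are invented placeholders, not values derived from real saliency maps:

```python
# Two-state gaze Markov chain: state 0 = fixation, state 1 = saccade.
# Transition probabilities are illustrative placeholders only.
P = [[0.9, 0.1],   # fixation -> fixation, fixation -> saccade
     [0.6, 0.4]]   # saccade  -> fixation, saccade  -> saccade

def steady_state(P, iters=200):
    """Power-iterate pi <- pi P; converges to the stationary distribution."""
    pi = [1.0, 0.0]   # arbitrary starting distribution
    for _ in range(iters):
        pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
              pi[0] * P[0][1] + pi[1] * P[1][1]]
    return pi

pi = steady_state(P)
vad = pi[1]  # steady-state saccade probability: the busyness (VAD) estimate
print(round(vad, 4))  # 0.1429
```

In the paper's setting the transition probabilities would instead be derived from the saliency maps of consecutive frames; the resulting steady-state saccade probability then serves as the busyness score.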
Giesbrecht, Timo; Merckelbach, Harald; van Oorsouw, Kim; Simeon, Daphne
2010-05-30
It is often assumed that when confronted with an emotional event, patients with depersonalization disorder (DPD) inhibit information processing, and that this fosters memory fragmentation. This hypothesis has not been tested in chronic depersonalization. The aim of this study was to investigate the temporal pattern of autonomic responding to emotional material in depersonalization disorder, along with concomitant deficits in subjective and objective memory formation (i.e., difficulty forming a coherent narrative consisting of an ordered sequence of events). Participants with depersonalization disorder (n=14) and healthy control participants (n=14) viewed an emotional video clip while their skin conductance levels (SCLs) were measured. Peritraumatic dissociation was measured before and after the clip, and memory performance was measured 35 min after viewing. Compared to controls, depersonalized participants exhibited a distinctly different temporal pattern of autonomic responding, characterized by an earlier peak and subsequent flattening of SCLs; maximum SCLs did not differ between the two groups. Moreover, unlike the control group, depersonalized participants showed no skin conductance recovery after clip offset. In terms of memory performance, patients exhibited objective memory fragmentation, which they also reported subjectively; however, they did not differ from controls in free-recall performance. Apparently, emotional responding in DPD is characterized by a shortened latency to peak with subsequent flattening, and is accompanied by memory fragmentation in the context of otherwise unremarkable memory functioning. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Yen, Peggy; Dumas, Sandra; Albert, Arianne; Gordon, Paula
2018-02-01
The placement of localization clips following percutaneous biopsy is standard practice in a variety of situations. Subsequent clip displacement creates challenges for imaging surveillance and surgical planning, and may cause confusion amongst radiologists and between surgeons and radiologists. Many causes have been proposed for this phenomenon, including the commonly accepted "accordion effect." Herein, we investigate the performance of a low-cost surgical clip system against 4 commercially available clips. We retrospectively reviewed 2112 patients who underwent stereotactic vacuum-assisted core biopsy followed by clip placement between January 2013 and June 2016. The primary performance parameter compared was displacement >10 mm following vacuum-assisted stereotactic core biopsy; within the group of clips that had displaced, the magnitude of displacement was also compared. There was a significant difference in displacement among the clip types (P < .0001), with significant pairwise differences in the proportion displaced between pediatric surgical clips and SecureMark (38% vs 28%; P = .001) and SenoMark (38% vs 27%; P = .0001). Among displaced clips, the surgical clips travelled approximately 25% farther on average. As a whole, the commercial clips performed better than the surgical clip after stereotactic vacuum-assisted core biopsy, suggesting that the surrounding outer component acts to anchor the central clip and minimize clip displacement. The same should apply to tomosynthesis-guided biopsy. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
Using CLIPS as the cornerstone of a graduate expert systems course
NASA Technical Reports Server (NTRS)
Yue, Kwok-Bun
1991-01-01
The effective use of the C Language Integrated Production System (CLIPS) as a cornerstone in a graduate expert systems course is described. The course included 8 or 9 hours of in-depth lecturing on CLIPS, as well as broad coverage of major topics and techniques in expert systems. As part of the course requirements, students solved two small yet non-trivial problems in CLIPS before going on to develop a toy expert system in CLIPS incrementally as the term project. Furthermore, students were required to evaluate CLIPS programs written by their classmates. An anonymous questionnaire at the end of the semester revealed that the students responded very favorably to the course, especially their experience with CLIPS.
ERIC Educational Resources Information Center
Kamiya, Nobuhiro
2018-01-01
This study investigated how learners' ages affect their interpretation of the nonverbal behaviors (NVBs) of teachers and other students in distinguishing between questions and statements in the second language (L2) classroom. After watching 48 short video clips without sound in which three L2 teachers asked a question or made a statement with or…
Hwang, Shin; Ha, Tae-Yong; Ahn, Chul-Soo; Moon, Deok-Bog; Kim, Ki-Hun; Song, Gi-Won; Jung, Dong-Hwan; Park, Gil-Chun; Lee, Sung-Gyu
2016-08-01
After having performed more than 2,000 cases of adult living donor liver transplantation (LDLT), we established the concept of right liver graft standardization. Right liver graft standardization is intended to provide hemodynamics-based and regeneration-compliant reconstruction of vascular inflow and outflow. It consists of the following components: right hepatic vein reconstruction includes a combination of caudal-side deep incision and patch venoplasty of the graft right hepatic vein to remove the acute angle between the graft right hepatic vein and the inferior vena cava; middle hepatic vein reconstruction includes interposition of a uniform-shaped conduit with large-sized homologous or prosthetic grafts; if the inferior right hepatic vein is present, its reconstruction includes funneling and unification venoplasty for multiple short hepatic veins; and if a donor portal vein anomaly is present, its reconstruction includes conjoined unification venoplasty for two or more portal vein orifices. This video clip, which shows the surgical technique from bench to reperfusion, presents a case of adult LDLT using a modified right liver graft from the patient's son. Our intention in proposing the concept of right liver graft standardization is that it can be universally applicable and may guarantee nearly the same outcomes regardless of the surgeon's experience. We believe that this reconstruction model can be applied to the majority of adult LDLT cases.
From The Horse's Mouth: Engaging With Geoscientists On Science
NASA Astrophysics Data System (ADS)
Katzenberger, J.; Morrow, C. A.; Arnott, J. C.
2011-12-01
"From the Horse's Mouth" is a project of the Aspen Global Change Institute (AGCI) that uses selected short video clips of scientists presenting and discussing their research in an interdisciplinary setting at AGCI as the core of an online interactive set of learning modules in the geosciences for grades 9-12 and 1st and 2nd year undergraduate students. As it stands, the video archive and associated material have limited utility; here we illustrate how they can be leveraged for educational purposes by systematically mining the resource and integrating it with a variety of supplemental user experiences. The project furthers several broad goals to: (a) improve the quality of formal and informal geoscience education with an emphasis on grades 9-12 and early undergraduates, (b) encourage and facilitate the engagement of geoscientists to strengthen STEM education by leveraging AGCI's interdisciplinary science program for educational purposes, (c) explore science as a human endeavor by providing a unique view of how scientists communicate in a research setting, potentially stimulating students to consider traditional and non-traditional geoscience careers, (d) promote student understanding of scientific methodology and inquiry, and (e) further student appreciation of the role of science in society, particularly related to understanding Earth system science and global change. The resource material at the core of this project is a videotape record of presentations and discussion among leading scientists from 35 countries participating in interdisciplinary workshops at AGCI on a broad array of geoscience topics over a period of 22 years. This unique archive represents approximately 1200 hours of video footage obtained over the course of 43 scientific workshops and 62 hours of public talks. The full spectrum of material represents scientists active on all continents with a diverse set of backgrounds and academic expertise in both the natural and social sciences.
We report on the video database resource, our data acquisition protocols, conceptual design for the learning modules, excerpts from the video archive illustrating both geoscience content utilized in educational module development and examples of video clips that explore the process of science and its nature as a human endeavor. A prototype of the user interface featuring a navigational strategy, a discussion of both content and process goals represented in the pilot material and its use in both formal and informal settings are presented.
Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.
Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin
2018-03-12
Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.
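As a rough illustration of the kind of high-frequency band-power feature described above, the sketch below computes spectral power in an assumed 30-45 Hz band using a naive DFT on synthetic one-channel signals. The band edges, sampling rate, and signals are illustrative assumptions, not the authors' pipeline (a real EEG analysis would typically use a proper spectral estimator such as Welch's method on multichannel recordings).

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the [f_lo, f_hi] Hz band via a naive DFT.
    Illustrative only; real pipelines use dedicated spectral estimators."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n  # frequency of DFT bin k
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128  # assumed sampling rate, Hz
t = [i / fs for i in range(fs)]  # one second of samples
theta = [math.sin(2 * math.pi * 5 * x) for x in t]   # 5 Hz "slow" activity
gamma = [math.sin(2 * math.pi * 40 * x) for x in t]  # 40 Hz "fast" activity

# Power in an assumed 30-45 Hz band separates the fast signal from the slow one.
print(band_power(gamma, fs, 30, 45) > band_power(theta, fs, 30, 45))  # True
```

In a classification setting, such band powers would be computed per electrode and fed to a classifier; here only the feature itself is sketched.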
Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals
Zeng, Ying; Yang, Kai; Tong, Li; Yan, Bin
2018-01-01
Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods. PMID:29534515
Shafir, Tal; Tsachor, Rachelle P; Welch, Kathleen B
2015-01-01
We have recently demonstrated that motor execution, observation, and imagery of movements expressing certain emotions can enhance corresponding affective states and therefore could be used for emotion regulation. But which specific movement(s) should one use in order to enhance each emotion? This study aimed to identify, using Laban Movement Analysis (LMA), the Laban motor elements (motor characteristics) that characterize movements whose execution enhances each of the basic emotions: anger, fear, happiness, and sadness. LMA provides a system of symbols describing its motor elements, which gives a written instruction (motif) for the execution of a movement or movement-sequence over time. Six senior LMA experts analyzed a validated set of video clips showing whole body dynamic expressions of anger, fear, happiness and sadness, and identified the motor elements that were common to (appeared in) all clips expressing the same emotion. For each emotion, we created motifs of different combinations of the motor elements common to all clips of the same emotion. Eighty subjects from around the world read and moved those motifs, to identify the emotion evoked when moving each motif and to rate the intensity of the evoked emotion. All subjects together moved and rated 1241 motifs, which were produced from 29 different motor elements. Using logistic regression, we found a set of motor elements associated with each emotion which, when moved, predicted the feeling of that emotion. Each emotion was predicted by a unique set of motor elements and each motor element predicted only one emotion. Knowledge of which specific motor elements enhance specific emotions can enable emotional self-regulation through adding some desired motor qualities to one's personal everyday movements (rather than mimicking others' specific movements) and through decreasing motor behaviors which include elements that enhance negative emotions.
Shafir, Tal; Tsachor, Rachelle P.; Welch, Kathleen B.
2016-01-01
We have recently demonstrated that motor execution, observation, and imagery of movements expressing certain emotions can enhance corresponding affective states and therefore could be used for emotion regulation. But which specific movement(s) should one use in order to enhance each emotion? This study aimed to identify, using Laban Movement Analysis (LMA), the Laban motor elements (motor characteristics) that characterize movements whose execution enhances each of the basic emotions: anger, fear, happiness, and sadness. LMA provides a system of symbols describing its motor elements, which gives a written instruction (motif) for the execution of a movement or movement-sequence over time. Six senior LMA experts analyzed a validated set of video clips showing whole body dynamic expressions of anger, fear, happiness and sadness, and identified the motor elements that were common to (appeared in) all clips expressing the same emotion. For each emotion, we created motifs of different combinations of the motor elements common to all clips of the same emotion. Eighty subjects from around the world read and moved those motifs, to identify the emotion evoked when moving each motif and to rate the intensity of the evoked emotion. All subjects together moved and rated 1241 motifs, which were produced from 29 different motor elements. Using logistic regression, we found a set of motor elements associated with each emotion which, when moved, predicted the feeling of that emotion. Each emotion was predicted by a unique set of motor elements and each motor element predicted only one emotion. Knowledge of which specific motor elements enhance specific emotions can enable emotional self-regulation through adding some desired motor qualities to one's personal everyday movements (rather than mimicking others' specific movements) and through decreasing motor behaviors which include elements that enhance negative emotions. PMID:26793147
21 CFR 884.1640 - Culdoscope and accessories.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the female genital organs. This generic type of device may include trocar and cannula, instruments... instruments include: lens cleaning brush, biopsy brush, clip applier (without clips), applicator, cannula...
21 CFR 884.1640 - Culdoscope and accessories.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the female genital organs. This generic type of device may include trocar and cannula, instruments... instruments include: lens cleaning brush, biopsy brush, clip applier (without clips), applicator, cannula...
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
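The rule-based paradigm described above, in which facts are matched against rule conditions and matching rules fire until no new facts emerge, can be sketched in miniature. The toy forward chainer and diagnostic rules below are illustrative assumptions in Python, not CLIPS itself; in CLIPS these rules would be written as defrule constructs matched against asserted facts.

```python
# Minimal forward-chaining sketch of the rule-based paradigm:
# each rule pairs a condition (a set of required facts) with a fact to assert.
def run(facts, rules):
    facts = set(facts)
    fired = True
    while fired:  # keep matching until quiescence (no rule adds a new fact)
        fired = False
        for condition, consequent in rules:
            if condition <= facts and consequent not in facts:
                facts.add(consequent)  # "fire" the rule
                fired = True
    return facts

# A hypothetical heuristic knowledge base, analogous to a set of CLIPS defrules.
rules = [
    ({"engine-cranks", "no-start"}, "suspect-fuel-or-spark"),
    ({"suspect-fuel-or-spark", "fuel-ok"}, "check-ignition"),
]
result = run({"engine-cranks", "no-start", "fuel-ok"}, rules)
print("check-ignition" in result)  # True
```

CLIPS adds much more on top of this skeleton (pattern matching with variables, conflict resolution strategies, rule priorities, truth maintenance), but the fire-until-quiescence loop is the core of the paradigm.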
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. 
Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. 
Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS.
Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb of RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules of thumb," which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm, including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers.
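The COOL capabilities listed above can be sketched briefly; the class, slot, and rule names below are hypothetical, used only to show a message-handler and a rule that pattern matches on a COOL object:

```clips
; Hypothetical COOL class -- illustrative only.
(defclass ROBOT (is-a USER)
   (slot battery-level (create-accessor read-write)))

; Message-handler invoked through message passing with (send ...).
(defmessage-handler ROBOT report ()
   (printout t (instance-name ?self) " battery at "
               ?self:battery-level "%" crlf))

; A CLIPS 6.0 rule pattern matching directly on COOL objects.
(defrule low-battery
   ?r <- (object (is-a ROBOT) (battery-level ?b&:(< ?b 20)))
   =>
   (send ?r report))
```

Creating an instance such as `(make-instance r1 of ROBOT (battery-level 15))` would activate the rule, which then sends the `report` message to the matched object -- the tight rule/object integration the abstract describes.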
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN, and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs, including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together so that explicit control can be maintained over which constructs are accessible to other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler.
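The interactively defined functions, overloaded functions, and global variables mentioned above correspond to the deffunction, defgeneric/defmethod, and defglobal constructs; a brief sketch with hypothetical names:

```clips
; A global variable.
(defglobal ?*default-height* = 1)

; A simple function.
(deffunction square (?x)
   (* ?x ?x))

; defgeneric/defmethod provide overloading by argument count and type.
(defgeneric area)

(defmethod area ((?side NUMBER))                    ; square
   (* ?side ?side))

(defmethod area ((?width NUMBER) (?height NUMBER))  ; rectangle
   (* ?width ?height))
```

Entering `(area 3)` at the CLIPS prompt dispatches to the one-argument method, while `(area 3 4)` dispatches to the two-argument method, which is what the abstract means by overloaded functions.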
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform-specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS.
Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or Microsoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5 or higher and 1Mb of RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. The CLIPS 6.0 documentation includes a User's Guide and a three-volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in Microsoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and Microsoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
Integrating an object system into CLIPS: Language design and implementation issues
NASA Technical Reports Server (NTRS)
Auburn, Mark
1990-01-01
This paper describes why an object system with integrated pattern-matching and object-oriented programming facilities is desirable for CLIPS, and how such a system can be integrated into CLIPS while maintaining the run-time performance and low memory usage for which CLIPS is known. The requirements for an object system in CLIPS that includes object-oriented programming and integrated pattern-matching are discussed, and various techniques for optimizing the object system and its integration with the pattern-matcher are presented.
Are YouTube videos accurate and reliable on basic life support and cardiopulmonary resuscitation?
Yaylaci, Serpil; Serinken, Mustafa; Eken, Cenker; Karcioglu, Ozgur; Yilmaz, Atakan; Elicabuk, Hayri; Dal, Onur
2014-10-01
The objective of this study is to investigate the reliability and accuracy of the information in YouTube videos related to CPR and BLS in accord with the 2010 CPR guidelines. YouTube was queried using four search terms, 'CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support', between 2011 and 2013. The sources that uploaded the videos, the recording time, the number of viewers in the study period, and the inclusion of humans or manikins were recorded. The videos were rated according to whether they displayed the correct order of resuscitative efforts in full accord with the 2010 CPR guidelines. Two hundred and nine videos meeting the inclusion criteria after the search in YouTube with the four search terms ('CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support') comprised the study sample subjected to the analysis. The median score of the videos was 5 (IQR: 3.5-6). Only 11.5% (n = 24) of the videos were found to be compatible with the 2010 CPR guidelines with regard to the sequence of interventions. Videos uploaded by 'Guideline bodies' had significantly higher rates of download when compared with the videos uploaded by other sources. The sources of the videos and the date of upload (year) were not shown to have any significant effect on the scores received (P = 0.615 and 0.513, respectively). The number of downloads did not differ between videos that were and were not compatible with the guidelines (P = 0.832). The videos downloaded more than 10,000 times had a higher score than the others (P = 0.001). The majority of YouTube video clips purporting to be about CPR are not relevant educational material. Of those that are focused on teaching CPR, only a small minority optimally meet the 2010 Resuscitation Guidelines. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.