Sample records for video footage shows

  1. Using Video Surveillance Footage to Support Validity of Self-Reported Classroom Data

    ERIC Educational Resources Information Center

    Lee, Dabae; Arthur, Ian T.; Morrone, Anastasia S.

    2017-01-01

    The use of video surveillance footage presents a new possibility in educational research as a reliable and valid source of learning and teaching activities in classrooms. However, the unique nature of surveillance footage requires different approaches and poses distinctive challenges in utilizing it in research, yet no methodological guides are…

  2. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    USGS Publications Warehouse

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in nearshore waters (<60-meter depth) off the Hawaiian Islands from 2002 to 2011 as part of the U.S. Geological Survey (USGS) Coastal and Marine Geology Program's Pacific Coral Reef Project, to improve seafloor characterization and for the development and ground-truthing of benthic-habitat maps. This report includes nearly 53 hours of digital underwater video footage collected during four USGS cruises and more than 10,200 still images extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.
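    One of the processing steps described above, extracting a still frame every 10 seconds along a transect, can be sketched generically. The snippet below is a plain OpenCV illustration of that kind of step, not the USGS processing chain, and the file naming is an assumption.

    ```python
    # Illustrative extraction of a still frame every N seconds from digitized
    # transect video (generic OpenCV sketch, not the USGS workflow itself).
    import cv2

    def extract_stills(video_path, interval_s=10):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
        step = int(round(fps * interval_s))
        saved, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                name = f"still_{index // step:04d}.png"   # hypothetical naming scheme
                cv2.imwrite(name, frame)
                saved.append(name)
            index += 1
        cap.release()
        return saved
    ```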

  3. Restored Moonwalk Footage Release

    NASA Image and Video Library

    2009-07-15

    Graphics showing how TV signals were sent from the Apollo 11 mission back to Earth are shown on a large video monitor above panelists at NASA's briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Carla Cioffi)

  4. Restored Moonwalk Footage Release

    NASA Image and Video Library

    2009-07-15

    A photograph from the 1960s showing Stan Lebar, former Westinghouse Electric program manager, holding two cameras used during the Apollo missions is seen on a large video monitor above panelists, including Stan Lebar, at NASA's briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Carla Cioffi)

  5. Stock Footage of Goddard Space Flight Center and Headquarters

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Produced for Century Teleproductions in Boston, MA, this video is a camera master showing various views, with natural sound, of the space flight center during the late spring. The finished footage is used in an interactive laser disc presentation at the Kennedy Space Center Visitor Center.

  6. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    NASA Astrophysics Data System (ADS)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and towed camera systems. These produce many hours of video material containing detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Fully implemented features include:

    • A user login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range, or any combination thereof
    • Video annotation organised in themes (tracks) such as biology and geology, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets
    • Download of products for scientific use

    This unique web application helps make costly ROV videos available online (estimated cost range between 5,000 and 10,000 Euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted…
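    The combinable search facets listed above (keywords, geographic position, depth and time range) can be illustrated with a simple in-memory filter. The field names and sample records below are assumptions for illustration only, not the V-App data model.

    ```python
    # Toy metadata search combining keyword, bounding-box, depth and time filters.
    from datetime import datetime

    videos = [
        {"keywords": {"biology", "coral"}, "lat": 21.3, "lon": -157.9,
         "depth_m": 42.0, "time": datetime(2010, 7, 14, 9, 30)},
        {"keywords": {"geology"}, "lat": 19.6, "lon": -155.0,
         "depth_m": 820.0, "time": datetime(2011, 3, 2, 14, 0)},
    ]

    def search(records, keyword=None, bbox=None, depth_range=None, time_range=None):
        """Return records matching every criterion that is not None.
        bbox is (min_lat, min_lon, max_lat, max_lon); ranges are (low, high)."""
        hits = []
        for r in records:
            if keyword and keyword not in r["keywords"]:
                continue
            if bbox and not (bbox[0] <= r["lat"] <= bbox[2] and bbox[1] <= r["lon"] <= bbox[3]):
                continue
            if depth_range and not (depth_range[0] <= r["depth_m"] <= depth_range[1]):
                continue
            if time_range and not (time_range[0] <= r["time"] <= time_range[1]):
                continue
            hits.append(r)
        return hits

    print(len(search(videos, keyword="coral", depth_range=(0, 100))))  # 1
    ```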

  7. Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment

    NASA Astrophysics Data System (ADS)

    Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco

    2018-06-01

    Remote sensing has evolved into the most efficient approach to assessing post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and the derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited by tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and of their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usability for video and photos, shown by a difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. The lower resolution of the video was compensated for by its small ground sampling distance. Reduced quality and usability resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of…
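    The compensating effect of a small ground sampling distance (GSD) mentioned above follows from the standard photogrammetric relation GSD = pixel size × flying height / focal length. The sensor and flight parameters below are hypothetical, not values from this study.

    ```python
    # Illustrative ground-sampling-distance calculation for low-altitude video.
    def ground_sampling_distance(pixel_size_m, focal_length_m, flying_height_m):
        """Ground footprint of one pixel, in metres."""
        return pixel_size_m * flying_height_m / focal_length_m

    # A modest video sensor flown low can still give a small GSD:
    gsd = ground_sampling_distance(pixel_size_m=6e-6,     # 6 µm pixel pitch (assumed)
                                   focal_length_m=0.004,  # 4 mm lens (assumed)
                                   flying_height_m=30.0)  # 30 m altitude (assumed)
    print(f"GSD is about {gsd * 100:.1f} cm per pixel")   # about 4.5 cm
    ```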

  8. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
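    Frame-to-frame registration of the kind VISAR performs can be sketched with generic tools. The snippet below is a minimal OpenCV illustration (assuming OpenCV 4.x) of registering each frame to a reference frame, not NASA's or Intergraph's algorithm.

    ```python
    # Minimal video stabilization by ECC registration of each frame to the first.
    import cv2
    import numpy as np

    def stabilize(frames):
        """Warp every frame onto the first one using a Euclidean ECC transform."""
        reference = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        h, w = reference.shape
        stabilized = [frames[0]]
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            warp = np.eye(2, 3, dtype=np.float32)
            _, warp = cv2.findTransformECC(reference, gray, warp,
                                           cv2.MOTION_EUCLIDEAN, criteria, None, 5)
            stabilized.append(cv2.warpAffine(frame, warp, (w, h),
                                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
        return stabilized
    ```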

  9. Restored Moonwalk Footage Release

    NASA Image and Video Library

    2009-07-15

    Mike Inchalik, president of Lowry Digital, talks about the job of restoring Apollo 11 moonwalk footage at a NASA briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Bill Ingalls)

  10. Relative effects of posture and activity on human height estimation from surveillance footage.

    PubMed

    Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter

    2011-10-10

    Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
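    Corrections of the kind this study provides can be applied as a relative difference between activity height and static height. The sketch below only illustrates the arithmetic; the correction value is a made-up placeholder, not one of the paper's results.

    ```python
    # Recover static stature from a height measured during an activity, given the
    # mean relative difference (dynamic vs. static) for that activity.
    def corrected_stature(measured_height_cm, relative_difference):
        return measured_height_cm / (1.0 + relative_difference)

    # e.g. a subject estimated at 175.0 cm mid-stride, assuming a hypothetical
    # -1.5 % mean difference between walking height and static height:
    print(f"{corrected_stature(175.0, -0.015):.1f} cm")   # about 177.7 cm
    ```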

  11. 2015 RECS Square Footage Methodology

    EIA Publications

    2017-01-01

    The square footage, or size, of a home is an important characteristic in understanding its energy use. The amounts of energy used for major end uses such as space heating and air conditioning are strongly related to the size of the home. The Residential Energy Consumption Survey (RECS), conducted by the U.S. Energy Information Administration (EIA), collects information about the size of the responding housing units as part of the data collection protocol. The methods used to collect data on housing unit size produce square footage estimates that are unique to RECS because they are designed to capture the energy-consuming space within a home. This document discusses how the 2015 RECS square footage estimates were produced.

  12. Language from police body camera footage shows racial disparities in officer respect

    PubMed Central

    Voigt, Rob; Camp, Nicholas P.; Prabhakaran, Vinodkumar; Hamilton, William L.; Hetey, Rebecca C.; Griffiths, Camilla M.; Jurgens, David; Jurafsky, Dan; Eberhardt, Jennifer L.

    2017-01-01

    Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of officer utterances. We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop. Such disparities in common, everyday interactions between police and the communities they serve have important implications for procedural justice and the building of police–community trust. PMID:28584085

  13. Language from police body camera footage shows racial disparities in officer respect.

    PubMed

    Voigt, Rob; Camp, Nicholas P; Prabhakaran, Vinodkumar; Hamilton, William L; Hetey, Rebecca C; Griffiths, Camilla M; Jurgens, David; Jurafsky, Dan; Eberhardt, Jennifer L

    2017-06-20

    Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of officer utterances. We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop. Such disparities in common, everyday interactions between police and the communities they serve have important implications for procedural justice and the building of police-community trust.

  14. KSC Wildlife Show

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This video highlights footage of the many forms of animal and plant life that inhabit the environs surrounding KSC. Shown are birds, alligators, butterflies, and plants as they react to shuttle launches and other activities emanating from KSC.

  15. Impacts of the 2011 Tohoku-oki tsunami along the Sendai coast protected by hard and soft seawalls; interpretations of satellite images, helicopter-borne video footage and field studies

    NASA Astrophysics Data System (ADS)

    Tappin, D. R.; Jordan, H. M.; Jordan, C. J.; Richmond, B. M.; Sugawara, D.; Goto, K.

    2012-12-01

    A combination of time-series satellite imagery, helicopter-borne video footage and field observation is used to identify the impact of a major tsunami on a low-lying coastal zone located in eastern Japan. A comparison is made between the coast protected by hard sea walls and the coast without. Changes to the coast are mapped from before and after imagery, and sedimentary processes identified from the video footage. The results are validated by field observations. The impact along a 'natural' coast, with minimal defences, is erosion focussed on the back beach. There is little erosion (or sedimentation) of the whole beach, and where active, erosion mainly forms V-shaped channels that are initiated during the tsunami flood and then further developed during backwash. Enigmatic, short-lived 'strand lines' are attributed to the slow fall of sea level after such a major tsunami. Backwash on such a low-lying area takes place as a sheet flood immediately after tsunami flooding has ceased, and subsequently, when the water level landward of coastal ridges falls below their elevation, becomes confined to channels formed on the coastal margin by the initial tsunami impact. Immediately after the tsunami, coastal reconstruction begins, sourced from the sediment recently flushed into the sea by tsunami backwash. Hard engineering structures are found to offer little defence against highly energetic tsunami waves that overtop them. The main cause of damage is scouring at the landward base of concrete-faced embankments constructed to defend the coast from erosion, which results in foundation weakening and collapse.

  16. Where Does RECS Square Footage Data Come From?

    EIA Publications

    2012-01-01

    The size of a home is a fixed characteristic strongly associated with the amount of energy consumed within it, particularly for space heating, air conditioning, lighting, and other appliances. As a part of the Residential Energy Consumption Survey (RECS), trained interviewers measure the square footage of each housing unit. RECS square footage data allow comparison of homes with varying characteristics. In-person measurements are vital because many alternate data sources, including property tax records, real estate listings, and respondent estimates, use varying definitions and underestimate square footage as defined for the purposes of evaluating residential energy consumption.

  17. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
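    The idea of building a cleaner still from several video frames can be illustrated in a few lines. The sketch below simply median-stacks a short run of frames with OpenCV; it is not the VISAR-derived tool described above, only an illustration of the underlying concept.

    ```python
    # Toy "video to photo" example: median-stack frames to suppress noise.
    # Assumes a static (or already stabilized) scene.
    import cv2
    import numpy as np

    def still_from_video(path, n_frames=15):
        cap = cv2.VideoCapture(path)
        frames = []
        while len(frames) < n_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame.astype(np.float32))
        cap.release()
        if not frames:
            raise ValueError("no frames read from " + path)
        return np.median(np.stack(frames), axis=0).astype(np.uint8)

    # cv2.imwrite("framed_photo.png", still_from_video("home_video.mp4"))
    ```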

  18. Person identification from aerial footage by a remote-controlled drone.

    PubMed

    Bindemann, Markus; Fysh, Matthew C; Sage, Sophie S K; Douglas, Kristina; Tummon, Hannah M

    2017-10-19

    Remote-controlled aerial drones (or unmanned aerial vehicles; UAVs) are employed for surveillance by the military and police, which suggests that drone-captured footage might provide sufficient information for person identification. This study demonstrates that person identification from drone-captured images is poor when targets are unfamiliar (Experiment 1), when targets are familiar and the number of possible identities is restricted by context (Experiment 2), and when moving footage is employed (Experiment 3). Person information such as sex, race and age is also difficult to access from drone-captured footage (Experiment 4). These findings suggest that such footage provides a particularly poor medium for person identification. This is likely to reflect the sub-optimal quality of such footage, which is subject to factors such as the height and velocity at which drones fly, viewing distance, unfavourable vantage points, and ambient conditions.

  19. Automatic Keyframe Summarization of User-Generated Video

    DTIC Science & Technology

    2014-06-01

    …using the framework presented in this paper. … Technology has been developed that classifies the genre of a video. Here, video genres are … types of videos that share similarities in content and structure. Many genres of video footage exist. Some examples include news, sports, movies, … cartoons, and commercials. Rasheed et al. [42] classify video genres (comedy, action, drama, and horror) with low-level video statistics, such as average…
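    Low-level statistics of the kind mentioned above, such as average shot length, are straightforward to compute once shot boundaries are known. The boundary frames below are hypothetical; a real system would detect cuts automatically.

    ```python
    # Mean shot duration in seconds from cut positions given as frame indices.
    def average_shot_length(shot_boundaries, total_frames, fps):
        cuts = [0] + sorted(shot_boundaries) + [total_frames]
        lengths = [(b - a) / fps for a, b in zip(cuts, cuts[1:])]
        return sum(lengths) / len(lengths)

    print(average_shot_length([120, 260, 410], total_frames=600, fps=25))   # 6.0
    ```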

  20. Debunking a Video on YouTube as an Authentic Research Experience

    NASA Astrophysics Data System (ADS)

    Davidowsky, Philip; Rogers, Michael

    2015-05-01

    Students are exposed to a variety of unrealistic physical experiences seen in movies, video games, and short online videos. A popular classroom activity has students examine footage to identify what aspects of physics are correctly and incorrectly represented.1-7 Some of the physical phenomena pictured might be tricks or illusions made easier to perform with the use of video, while others are removed from their historical context, leaving the audience to form misguided conclusions about what they saw with only the information in the video. One such video, in which the late Eric Laithwaite, a successful British engineer and inventor, claims that a spinning wheel "becomes light as a feather," provides an opportunity for students to investigate Laithwaite's claim.8 The use of video footage can not only engage students in learning physics9 but also provide an opportunity for authentic research experiences.

  1. Coastal changes in the Sendai area from the impact of the 2011 Tōhoku-oki tsunami: Interpretations of time series satellite images, helicopter-borne video footage and field observations

    NASA Astrophysics Data System (ADS)

    Tappin, David R.; Evans, Hannah M.; Jordan, Colm J.; Richmond, Bruce; Sugawara, Daisuke; Goto, Kazuhisa

    2012-12-01

    A combination of time-series satellite imagery, helicopter-borne video footage and field observation is used to identify the impact of a major tsunami on a low-lying coastal zone located in eastern Japan. A comparison is made between the coast protected by armoured 'engineered' sea walls and the coast without. Changes are mapped from before and after imagery, and sedimentary processes identified from the video footage. The results are validated by field observations. The impact along a 'natural' coast, with minimal defences, is erosion focussed on the back beach. Along coasts with hard engineered protection constructed to defend against erosion, the presence of three- to six-metre-high concrete-faced embankments results in severe erosion on their landward faces. The erosion is due to the tsunami wave accelerating through a hydraulic jump as it passes over the embankment, resulting in the formation of a ditch into which the foundations collapse. Engineered coastal defences are thus found to offer little defence against highly energetic tsunami waves that overtop them. There is little erosion (or sedimentation) of the whole beach, and where active, it mainly forms V-shaped channels. These channels are probably initiated during tsunami inflow and then further developed during tsunami backflow. Tsunami backflow on such a low-lying area takes place energetically as sheet flow immediately after tsunami flooding has ceased. Subsequently, when the water level landward of the coastal dune ridges falls below their elevation, flow becomes confined to rivers and breaches in the coast formed during tsunami inflow. Enigmatic, short-lived 'strand lines' are attributed to the slow fall of sea level after such a major tsunami. Immediately after the tsunami, coastal reconstruction begins, sourced from the sediment recently flushed into the sea by tsunami backflow.

  2. Restored Moonwalk Footage Release

    NASA Image and Video Library

    2009-07-15

    Stan Lebar, former Westinghouse Electric program manager, left, talks about the Apollo era TV cameras such as the one on display in the foreground as Richard Nafzger, team lead and Goddard engineer, listens at NASA's briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Bill Ingalls)

  3. Restored Moonwalk Footage Release

    NASA Image and Video Library

    2009-07-15

    NASA moderator Mark Hess, left, directs reporters' questions to former Westinghouse Electric program manager Stan Lebar, second from left, team lead and Goddard engineer Richard Nafzger and president of Lowry Digital Mike Inchalik, far right, at a NASA briefing where restored Apollo 11 moonwalk footage was revealed for the first time at the Newseum, Thursday, July 16, 2009, in Washington, DC. Photo Credit: (NASA/Bill Ingalls)

  4. An evaluation of the efficacy of video displays for use with chimpanzees (Pan troglodytes).

    PubMed

    Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J

    2012-05-01

    Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.

  5. An Evaluation of the Efficacy of Video Displays for Use With Chimpanzees (Pan troglodytes)

    PubMed Central

    HOPPER, LYDIA M.; LAMBETH, SUSAN P.; SCHAPIRO, STEVEN J.

    2013-01-01

    Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans’, yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model’s methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. PMID:22318867

  6. The development of a tool for assessing the quality of closed circuit camera footage for use in forensic gait analysis.

    PubMed

    Birch, Ivan; Vernon, Wesley; Walker, Jeremy; Saxelby, Jai

    2013-10-01

    Gait analysis from closed circuit camera footage is now commonly used as evidence in criminal trials. The biomechanical analysis of human gait is a well established science in both clinical and laboratory settings. However, closed circuit camera footage is rarely of the quality of that taken in the more controlled clinical and laboratory environments. The less than ideal quality of much of this footage for use in gait analysis is associated with a range of issues, the combination of which can often render the footage unsuitable for use in gait analysis. The aim of this piece of work was to develop a tool for assessing the suitability of closed circuit camera footage for the purpose of forensic gait analysis. A Delphi technique was employed with a small sample of expert forensic gait analysis practitioners, to identify key quality elements of CCTV footage used in legal proceedings. Five elements of the footage were identified and then subdivided into 15 contributing sub-elements, each of which was scored using a 5-point Likert scale. A Microsoft Excel worksheet was developed to calculate automatically an overall score from the fifteen sub-element scores. Five expert witnesses experienced in using CCTV footage for gait analysis then trialled the prototype tool on current case footage. A repeatability study was also undertaken using standardized CCTV footage. The results showed the tool to be a simple and repeatable means of assessing the suitability of closed circuit camera footage for use in forensic gait analysis. The inappropriate use of poor quality footage could lead to challenges to the practice of forensic gait analysis. All parties involved in criminal proceedings must therefore understand the fitness for purpose of any footage used. The development of this tool could offer a method of achieving this goal, and help to assure the continued role of forensic gait analysis as an aid to the identification process. Copyright © 2013 Elsevier Ltd and Faculty of
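    An overall score aggregated from Likert-scored sub-elements, as the spreadsheet described above produces, can be sketched simply. The sub-element names, their number and the equal weighting below are assumptions for illustration, not the published tool.

    ```python
    # Aggregate 1-5 Likert scores for quality sub-elements into a percentage.
    def overall_quality(scores):
        if any(not 1 <= v <= 5 for v in scores.values()):
            raise ValueError("Likert scores must be between 1 and 5")
        return 100.0 * sum(scores.values()) / (5 * len(scores))

    # Hypothetical sub-elements (the real tool defines 15 of them):
    print(overall_quality({"frame rate": 2, "resolution": 4, "lighting": 3,
                           "camera angle": 5, "subject visibility": 4}))   # 72.0
    ```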

  7. Caught on Camera: Special Education Classrooms and Video Surveillance

    ERIC Educational Resources Information Center

    Heintzelman, Sara C.; Bathon, Justin M.

    2017-01-01

    In Texas, state policy anticipates that installing video cameras in special education classrooms will decrease student abuse inflicted by teachers. Lawmakers assume that collecting video footage will prevent teachers from engaging in malicious actions and prosecute those who choose to harm children. At the request of a parent, Section 29.022 of…

  8. Overview: DVD-video disc set of seafloor transects during USGS research cruises in the Pacific Ocean

    USGS Publications Warehouse

    Chezar, Henry; Newman, Ivy

    2006-01-01

    Many USGS research programs involve the gathering of underwater seafloor video footage. This footage was captured on a variety of media, including Beta III and VHS tapes. Much of this media is now deteriorating, prompting the migration of this video footage onto DVD-Video discs. Advantages of using DVD-Video discs are: less storage space, ease of transport, wider distribution, and non-degradational viewing of the media. The videos in this particular collection (328 of them) were made on the ocean floor under President Reagan's Exclusive Economic Zone proclamation of 1983. There are now five copies of these 328 discs in existence: at the USGS libraries in Menlo Park, Calif., Denver, Colo., and Reston, Va.; at the USGS Publications Warehouse (masters from which to make copies for customers); and Hank Chezar's USGS Western Coastal and Marine Geology team archives. The purpose of Open-File Report 2004-1101 is to provide users with a listing of the available DVD-Video discs (with their Open-File Report numbers) along with a brief description of their associated USGS research activities. Each disc was created by first encoding the source video and audio into MPEG-2 streams using the MediaPress Pro hardware encoder. A menu for the disc was then made using Adobe Photoshop 6.0. The disc was then authored using DVD Studio Pro and subsequently written onto a DVD-R recordable disc.

  9. Dense mesh sampling for video-based facial animation

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for selecting feature points on a three-dimensional triangle mesh obtained using various techniques from several video recordings. This approach has a dual purpose. First, it allows the data stored for the purpose of facial animation to be minimized, so that instead of storing the position of each vertex in each frame, one could store only a small subset of vertices for each frame and calculate the positions of the others based on that subset. The second purpose is to select feature points that could be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that which can be achieved using marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footage using stereophotogrammetry.

  10. Video documentation of experiments at the USGS debris-flow flume 1992–2017

    USGS Publications Warehouse

    Logan, Matthew; Iverson, Richard M.

    2007-11-23

    This set of videos presents about 18 hours of footage documenting the 163 experiments conducted at the USGS debris-flow flume from 1992 to 2017. Owing to improvements in video technology over the years, the quality of footage from recent experiments generally exceeds that from earlier experiments. Use the list below to access the individual videos, which are mostly grouped by date and subject matter. When a video is selected from the list, multiple video sequences are generally shown in succession, beginning with a far-field overview and proceeding to close-up views and post-experiment documentation. Interpretations and data from experiments at the USGS debris-flow flume are not provided here but can be found in published reports, many of which are available online at https://profile.usgs.gov/riverson/. A brief introduction to the flume facility is also available online in USGS Open-File Report 92–483 [http://pubs.er.usgs.gov/usgspubs/ofr/ofr92483].

  11. Debunking a Video on Youtube as an Authentic Research Experience

    ERIC Educational Resources Information Center

    Davidowsky, Philip; Rogers, Michael

    2015-01-01

    Students are exposed to a variety of unrealistic physical experiences seen in movies, video games, and short online videos. A popular classroom activity has students examine footage to identify what aspects of physics are correctly and incorrectly represented. Some of the physical phenomena pictured might be tricks or illusions made easier to…

  12. Effect of Video-Cases on the Acquisition of Situated Knowledge of Teachers

    ERIC Educational Resources Information Center

    Geerts, Walter M.; Steenbeek, Henderien W.; van Geert, Paul L. C.

    2018-01-01

    Video footage is frequently used in teacher education. According to Sherin and Dyer (2017), this is often done in a way that contradicts recent studies; they argue that video is suitable for observing and interpreting interactions in the classroom. This contributes to teachers' situated knowledge, which allows expert teachers to act intuitively,…

  13. Immersive video for virtual tourism

    NASA Astrophysics Data System (ADS)

    Hernandez, Luis A.; Taibo, Javier; Seoane, Antonio J.

    2001-11-01

    This paper describes a new panoramic, 360° video system and its use in a real application for virtual tourism. The development of this system required the design of new hardware for multi-camera recording, and software for video processing in order to assemble the panorama frames and to play back the resulting high-resolution video footage on a regular PC. The system makes use of new VR display hardware, such as the WindowVR, to make the view dependent on the viewer's spatial orientation and so enhance immersiveness. There are very few examples of similar technologies, and the existing ones are extremely expensive and/or impossible to implement on personal computers with acceptable quality. The idea of the system starts from the concept of the panorama picture, developed in technologies such as QuickTimeVR. This idea is extended to the concept of a panorama frame, which leads to panorama video. However, many problems must be solved to implement this simple scheme. Data acquisition involves simultaneous footage recording in every direction, and later processing to convert each set of frames into a single high-resolution panorama frame. Since no common hardware is capable of 4096x512 video playback at a 25 fps rate, the video must be split into smaller strips, and the system must fetch the right frames of the right parts as the user's movement demands. As the system must be immersive, the physical interface for watching the 360° video is a WindowVR, that is, a flat screen with an orientation tracker that the user holds in his hands, moving it as if it were a virtual window through which the city and its activity is shown.
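    The playback strategy described above, where a wide panorama frame is stored as strips and only the strips covering the current view are fetched, can be sketched as follows. The strip width and field of view are illustrative assumptions, not the system's actual parameters.

    ```python
    # Select which vertical strips of a 4096x512 panorama frame are needed
    # to render the view for a given yaw angle.
    PANO_WIDTH = 4096
    STRIP_WIDTH = 512          # 8 strips per panorama frame (assumed)

    def visible_strips(yaw_deg, fov_deg=90):
        centre = (yaw_deg % 360) / 360.0 * PANO_WIDTH
        half = fov_deg / 360.0 * PANO_WIDTH / 2
        cols = [c % PANO_WIDTH for c in range(int(centre - half), int(centre + half))]
        return sorted({c // STRIP_WIDTH for c in cols})

    print(visible_strips(10))  # view straddles the 0/360° seam, e.g. [0, 1, 7]
    ```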

  14. An educational video to promote multi-factorial approaches for fall and injury prevention in long-term care facilities

    PubMed Central

    2014-01-01

    Background: Older adults living in long-term care (LTC) settings are vulnerable to fall-related injuries. There is a need to develop and implement evidence-based approaches to address fall injury prevention in LTC. Knowledge translation (KT) interventions to support the uptake of evidence-based approaches to fall injury prevention in LTC need to be responsive to the learning needs of LTC staff and use mediums, such as videos, that are accessible and easy to use. This article describes the development of two unique educational videos to promote fall injury prevention in long-term care (LTC) settings. These videos differ from other fall prevention videos in that they include video footage of real-life falls captured in the LTC setting. Methods: Two educational videos were developed (2012–2013) to support the uptake of findings from a study exploring the causes of falls based on video footage captured in LTC facilities. The videos were developed by: (1) conducting a learning needs assessment in LTC settings via six focus groups; (2) liaising with LTC settings to identify learning priorities through unstructured conversations; and (3) aligning the content with principles of adult learning theory. Results: The videos included footage of falls, interviews with older adults and fall injury prevention experts. The videos present evidence-based fall injury prevention recommendations aligned to the needs of LTC staff and: (1) highlight recommendations deemed by LTC staff as most urgent (learner-centered learning); (2) highlight negative impacts of falls on older adults (encourage meaning-making); and (3) prompt LTC staff to reflect on fall injury prevention practices (encourage critical reflection). Conclusions: Educational videos are an important tool available to researchers seeking to translate evidence-based recommendations into LTC settings. Additional research is needed to determine their impact on practice. PMID:24884899

  15. Video from Panel Discussion with Joseph Fraumeni and David Schottenfeld

    Cancer.gov

    Video footage from Panel Discussion with Joseph Fraumeni and David Schottenfeld on Cancer Epidemiology over the Last Half-Century and Thoughts on the Future. The discussion took place on May 11, 2012, when DCEG hosted Dr. Schottenfeld as a Visiting Scholar.

  16. Video ethnography during and after caesarean sections: methodological challenges.

    PubMed

    Stevens, Jeni; Schmied, Virginia; Burns, Elaine; Dahlen, Hannah G

    2017-07-01

    To describe the challenges of, and steps taken to, successfully collect video ethnographic data during and after caesarean sections. Video ethnographic research uses real-time video footage to study a cultural group or phenomenon in the natural environment. It allows researchers to discover previously undocumented practices, which in turn provides insight into strengths and weaknesses in practice. This knowledge can be used to translate evidence-based interventions into practice. Video ethnographic design. A video ethnographic approach was used to observe the contact between mothers and babies immediately after elective caesarean sections in a tertiary hospital in Sydney, Australia. Women, their support people and staff participated in the study. Data were collected via video footage and field notes in the operating theatre, recovery and the postnatal ward. Challenges faced whilst conducting video ethnographic research included attaining ethics approval, recruiting vast numbers of staff members and 'vulnerable' pregnant women, and endeavouring to be a 'fly on the wall' and a 'complete observer'. There were disadvantages to being an 'insider' whilst conducting the research, because occasionally staff members requested help with clinical tasks whilst collecting data; however, it was also an advantage, as it enabled ease of access to the environment and the staff members who were to be recruited. Despite the challenges, video ethnographic research enabled the provision of unique data that could not be attained by any other means. Video ethnographic data are beneficial as they provide exceptionally rich data for in-depth analysis of interactions between the environment, equipment and people in the hospital environment. The analysis of this type of data can then be used to inform improvements for future care. © 2016 John Wiley & Sons Ltd.

  17. The effect of frame rate on the ability of experienced gait analysts to identify characteristics of gait from closed circuit television footage.

    PubMed

    Birch, Ivan; Vernon, Wesley; Burrow, Gordon; Walker, Jeremy

    2014-03-01

    Forensic gait analysis is increasingly being used as part of criminal investigations. A major issue is the quality of the closed circuit television (CCTV) footage used, particularly the frame rate, which can vary from 25 frames per second to one frame every 4 seconds. To date, no study has investigated the effect of frame rate on forensic gait analysis. A single subject was fitted with an ankle foot orthosis and recorded walking at 25 frames per second. 3D motion data were also collected, providing an absolute assessment of the gait characteristics. The CCTV footage was then edited to produce a set of eight additional pieces of footage, at various frame rates. Practitioners with knowledge of forensic gait analysis were recruited and instructed to record their observations regarding the characteristics of the subject's gait from the footage. They were sequentially sent web links to the nine pieces of footage, lowest frame rate first, and a simple observation recording form, over a period of 8 months. A sample-based Pearson product-moment correlation analysis of the results demonstrated a significant positive relationship between frame rate and scores (r=0.868, p=0.002). The results of this study show that frame rate affects the ability of experienced practitioners to identify characteristics of gait captured on CCTV footage. Every effort should therefore be made to ensure that CCTV footage likely to be used in criminal proceedings is captured at as high a frame rate as possible. © 2013.
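    The study's headline statistic is a Pearson product-moment correlation between frame rate and observation score. The snippet below only illustrates that computation with made-up numbers, not the study's data.

    ```python
    # Pearson correlation between frame rate and number of gait characteristics
    # identified (values are illustrative placeholders).
    from scipy.stats import pearsonr

    frame_rates = [0.25, 0.5, 1, 2, 5, 8, 12, 18, 25]    # frames per second
    scores      = [3, 4, 6, 7, 10, 11, 13, 14, 15]       # characteristics identified

    r, p = pearsonr(frame_rates, scores)
    print(f"r = {r:.3f}, p = {p:.4f}")                    # strong positive relationship
    ```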

  18. The Tacoma Narrows Bridge Collapse on Film and Video

    NASA Astrophysics Data System (ADS)

    Olson, Don; Hook, Joseph; Doescher, Russell; Wolf, Steven

    2015-11-01

    This month marks the 75th anniversary of the Tacoma Narrows Bridge collapse. During a gale on Nov. 7, 1940, the bridge exhibited remarkable oscillations before collapsing spectacularly (Figs. 1-5). Physicists over the years have spent a great deal of time and energy studying this event. By using open-source analysis tools and digitized footage of the disaster, physics students in both high school and college can continue in this tradition. Students can watch footage of "Galloping Gertie," ask scientific questions about the bridge's collapse, analyze data, and draw conclusions from that analysis. Students should be encouraged to pursue their own investigations, but the question that drove our inquiry was this: "When physics classes watch modern video showing the oscillations and the free fall of the bridge fragments, are these scenes sped up, slowed down, or at the correct speed compared to what was observed by the eyewitnesses on Nov. 7, 1940?"
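    One way to answer the speed question posed above is to compare the free-fall time predicted by kinematics with the duration of the fall in a given clip. The drop height and frame counts below are assumed placeholders, not measurements from the actual footage.

    ```python
    # Compare predicted free-fall time with the fall duration as played back.
    import math

    def free_fall_time(height_m, g=9.81):
        """Time to fall a given height from rest, ignoring air resistance."""
        return math.sqrt(2 * height_m / g)

    drop_height_m = 59.0        # assumed height of the roadway above the water
    frames_of_fall = 80         # assumed number of frames the fall spans in a clip
    clip_fps = 30               # playback rate of the digitized clip

    predicted = free_fall_time(drop_height_m)     # about 3.5 s
    observed = frames_of_fall / clip_fps          # about 2.7 s as played back
    print(f"predicted {predicted:.1f} s, played back in {observed:.1f} s")
    # A played-back fall much shorter than predicted means the clip runs fast.
    ```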

  19. Square Footage Requirements for Use in Developing the Local Facilities Plans and State Capital Outlay Applications for Funding.

    ERIC Educational Resources Information Center

    Georgia State Dept. of Education, Atlanta. Facilities Services Unit.

    This document presents the space requirements for Georgia's elementary, middle, and high schools. All square footage requirements are computed by using inside dimensions of a room; the square footage of support spaces in suites may be included when computing the square footage of the suite. Examples of support spaces include storage rooms,…

  20. Showing R-Rated Videos in School.

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    1999-01-01

    Since 1990, there have been at least six published court decisions concerning teachers' use of controversial videos in public schools. A relevant district policy led the Colorado Supreme Court to uphold a teacher's termination for showing 12th graders an R-rated 1900 Bertolucci film on fascism. Implications are discussed. (MLH)

  1. The use of student-driven video projects as an educational and outreach tool

    NASA Astrophysics Data System (ADS)

    Bamzai, A.; Farrell, W.; Klemm, T.

    2014-12-01

    With recent technological advances, the barriers to filmmaking have been lowered, and it is now possible to record and edit video footage with a smartphone or a handheld camera and free software. Students accustomed to documenting their every-day experiences for multimedia-rich social networking sites feel excited and creatively inspired when asked to take on ownership of more complex video projects. With a small amount of guidance on shooting primary and secondary footage and an overview of basic interview skills, students are self-motivated to identify the learning themes with which they resonate most strongly and record their footage in a way that is true to their own experience. The South Central Climate Science Center (SC-CSC) is one of eight regional centers formed by the U.S. Department of the Interior in order to provide decision makers with the science, tools, and information they need to address the impacts of climate variability and change on their areas of responsibility. An important component of this mission is to innovate in the areas of translational science and science communication. This presentation will highlight how the SC-CSC used student-driven video projects to document our Early Career Researcher Workshop and our Undergraduate Internship for Underrepresented Minorities. These projects equipped the students with critical thinking and project management skills, while also providing a finished product that the SC-CSC can use for future outreach purposes.

  2. Educational Video Recording and Editing for The Hand Surgeon

    PubMed Central

    Rehim, Shady A.; Chung, Kevin C.

    2016-01-01

    Digital video recordings are increasingly used across various medical and surgical disciplines including hand surgery for documentation of patient care, resident education, scientific presentations and publications. In recent years, the introduction of sophisticated computer hardware and software technology has simplified the process of digital video production and improved means of disseminating large digital data files. However, the creation of high quality surgical video footage requires basic understanding of key technical considerations, together with creativity and sound aesthetic judgment of the videographer. In this article we outline the practical steps involved with equipment preparation, video recording, editing and archiving as well as guidance for the choice of suitable hardware and software equipment. PMID:25911212

  3. Learning Hierarchical Skills for Game Agents from Video of Human Behavior

    DTIC Science & Technology

    2009-01-01

    …intelligent agents for computer games is an important aspect of game development. However, traditional methods are expensive, and the resulting agents… Constructing autonomous agents is an essential task in game development. In this paper, we outlined a system that analyzes preprocessed video footage of…

  4. Historical Footage of John Glenn Friendship 7

    NASA Technical Reports Server (NTRS)

    1962-01-01

    The Friendship 7 mission launch on the 20th day of February marked the first time that an American attempted to orbit the Earth. Historical footage of John Glenn's suit-up, ride out to the launch pad, countdown, liftoff, booster engine cutoff, and separation of the booster engine escape tower is shown. Views of the Earth, Glenn's manual control of the electrical fly-by-wire system, and the recovery of the landing vehicle from the ocean are presented.

  5. Development of a Spanish-Language Hospice Video.

    PubMed

    Chung, Kyusuk; Augustin, Frankline; Esparza, Salvador

    2017-09-01

    The nation faces a persistent issue of delayed access to hospice care. Even though hospice enrollment is considered to be one of the most difficult medical decisions, physician clinics and hospitals lack tools for helping patients/families faced with making decisions about enrollment. The health-care literature lacks discussion of the development of decision-making aids in the context of hospice decisions for minority ethnic groups, even though those groups have decisional needs that may differ from those of non-Hispanic whites. To fill the gap, we developed a video of a Latino hospice patient with footage showing how the patient was being taken care of by her family with support from a hospice disciplinary team. A primary objective of this article is to describe how focus groups, existing decision aids, and individual interviews were used to develop and improve a Spanish-language hospice educational video targeting Latino subgroups with linguistic, cultural, and educational barriers. These steps may provide guidelines for developing and revising health-related videos targeting other minority ethnic groups.

  6. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position, as well as a more detailed representation of single salient people, can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the correspondingly good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimates of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.
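    The wide-baseline benefit described above comes down to triangulating a matched image point (for example, the top of a person's head) from two viewing rays with good intersection geometry. The camera centres and ray directions below are placeholders; a real system would derive them from calibrated pan-tilt-zoom parameters.

    ```python
    # Midpoint triangulation of a point from two (possibly skew) viewing rays.
    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2):
        """Closest point between ray c1 + s*d1 and ray c2 + t*d2."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = c1 - c2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                 # near zero only for parallel rays
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return ((c1 + s * d1) + (c2 + t * d2)) / 2.0

    head = triangulate_midpoint(np.array([0.0, 0.0, 6.0]),  np.array([0.5, 1.0, -0.43]),
                                np.array([10.0, 0.0, 6.0]), np.array([-0.5, 1.0, -0.43]))
    print(head)   # [5.0, 10.0, 1.7]; the z value estimates the person's height
    ```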

  7. Onboard Systems Record Unique Videos of Space Missions

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.

  8. An objective measure of hyperactivity aspects with compressed webcam video.

    PubMed

    Wehrmann, Thomas; Müller, Jörg Michael

    2015-01-01

    Objective measures of physical activity are currently not considered in clinical guidelines for the assessment of hyperactivity in the context of Attention-Deficit/Hyperactivity Disorder (ADHD), due to low and inconsistent associations with clinical ratings, missing age-related norm data and high technical requirements. This pilot study introduces a new objective measure of physical activity using compressed webcam video footage, which should be less affected by age-related variables. A pre-test established a preliminary standard procedure for testing a clinical sample of 39 children aged 6-16 years (21 with a clinical ADHD diagnosis, 18 without). Subjects were filmed for 6 min while solving a standardized cognitive performance task. Our webcam-based video-activity score was compared with two independent video-based movement ratings by students; with ratings of inattentiveness, hyperactivity and impulsivity by clinicians (DCL-ADHD), who gave the clinical ADHD diagnoses, and by parents (FBB-ADHD); and with physical features (age, weight, height, BMI), using mean scores, correlations and multiple regression. Our video-activity score showed high agreement (r = 0.81) with the video-based movement ratings, but also considerable associations with age-related physical attributes. After controlling for age-related confounders, the video-activity score did not show the expected association with clinicians' or parents' hyperactivity ratings. Our preliminary conclusion is that our video-activity score assesses physical activity but not specific information related to hyperactivity. The general problem of defining and assessing hyperactivity with objective criteria remains.
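    The exact computation behind the video-activity score is not given in this abstract, so the frame-differencing measure below is only a generic illustration of an objective activity index derived from webcam footage, not the paper's method.

    ```python
    # Mean absolute grey-level change per pixel between consecutive frames.
    import cv2
    import numpy as np

    def activity_score(path):
        cap = cv2.VideoCapture(path)
        ok, prev = cap.read()
        if not ok:
            raise ValueError("could not read " + path)
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
        diffs = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            diffs.append(float(np.mean(np.abs(gray - prev))))
            prev = gray
        cap.release()
        return sum(diffs) / len(diffs) if diffs else 0.0
    ```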

  9. TRW Video News: Chandra X-ray Observatory

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This NASA Kennedy Space Center-sponsored video release presents live footage of the Chandra X-ray Observatory prior to STS-93, as well as several short animations recreating some of its activities in space. These animations include a Space Shuttle fly-by with Chandra, two perspectives of Chandra's deployment from the Shuttle, the Chandra deployment orbit sequence, the Inertial Upper Stage (IUS) first-stage burn, and finally a "beauty shot," which represents another animated view of Chandra in space.

  10. Distributing and Showing Farmer Learning Videos in Bangladesh

    ERIC Educational Resources Information Center

    Bentley, Jeffery W.; Van Mele, Paul; Harun-ar-Rashid, Md.; Krupnik, Timothy J.

    2016-01-01

    Purpose: To describe the results of showing farmer learning videos through different types of volunteers. Design/Methodology/Approach: Semi-structured interviews with volunteers from different occupational groups in Bangladesh, and a phone survey with 227 respondents. Findings: Each occupational group acted differently. Shop keepers, tillage…

  11. An Investigation into the Relationships between Higher Education Facility Square Footage and Student Enrollments, University Endowments, and Student Tuition

    ERIC Educational Resources Information Center

    Chapman, James David

    2012-01-01

    America's colleges and universities have expanded campus facilities by renovating and increasing square footage. This is in contrast to general construction activity during the same time period. This quantitative study investigates the relationship between university and college campus facility square footage per FTE and university…

  12. Judgments of Nonverbal Behaviour by Children with High-Functioning Autism Spectrum Disorder: Can They Detect Signs of Winning and Losing from Brief Video Clips?

    ERIC Educational Resources Information Center

    Ryan, Christian; Furley, Philip; Mulhall, Kathleen

    2016-01-01

    Typically developing children are able to judge who is winning or losing from very short clips of video footage of behaviour between active match play across a number of sports. Inferences from "thin slices" (short video clips) allow participants to make complex judgments about the meaning of posture, gesture and body language. This…

  13. Action Cam Footage from U.S. Spacewalk 41

    NASA Image and Video Library

    2017-05-09

    This footage was taken by NASA astronaut Peggy Whitson during a spacewalk on the International Space Station on Thursday, March 30. She was joined on the spacewalk by NASA astronaut Shane Kimbrough. The two spacewalkers reconnected cables and electrical connections on PMA-3 at its new home on top of the Harmony module. They also installed the second of the two upgraded computer relay boxes on the station’s truss and installed shields and covers on PMA-3 and the now-vacant common berthing mechanism port on Tranquility.

  14. The Biology and Space Exploration Video Series

    NASA Technical Reports Server (NTRS)

    William, Jacqueline M.; Murthy, Gita; Rapa, Steve; Hargens, Alan R.

    1995-01-01

    The Biology and Space Exploration video series illustrates NASA's commitment to increasing the public awareness and understanding of life sciences in space. The video series collection, which was initiated by Dr. Joan Vernikos at NASA headquarters and Dr. Alan Hargens at NASA Ames Research Center, will be distributed to universities and other institutions around the United States. The video series parallels the "Biology and Space Exploration" course taught by NASA Ames scientists at Stanford University, Palo Alto, California. In the past, students have shown considerable enthusiasm for this course and have gained a much better appreciation and understanding of space life sciences and exploration. However, due to the unique nature of the topics and the scarcity of available educational materials, most students in other universities around the country are unable to benefit from this educational experience. Therefore, with the assistance of Ames experts, we are producing a video series on selected aspects of life sciences in space to expose undergraduate students to the effects of gravity on living systems. Additionally, the video series collection contains space flight footage, graphics, charts, pictures, and interviews to make the materials interesting and intelligible to viewers.

  15. 25 CFR 256.11 - What are the occupancy and square footage standards for a dwelling provided with Category C...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 25 (Indians), edition of 2014-04-01. Department of the Interior, Housing, Housing Improvement Program, § 256.11: occupancy and square footage standards. The section's table lists the maximum number of bedrooms and total dwelling square footage by number of persons in the household: 1-3 persons, 2 bedrooms, 900 square feet; 4-6 persons, 3 bedrooms, 1,050 square feet; 7 or more persons, 4 bedrooms, 1,350 square feet.

  16. 25 CFR 256.11 - What are the occupancy and square footage standards for a dwelling provided with Category C...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 25 (Indians), edition of 2011-04-01. Department of the Interior, Housing, Housing Improvement Program, § 256.11: occupancy and square footage standards. The section's table lists the maximum number of bedrooms and total dwelling square footage by number of persons in the household: 1-3 persons, 2 bedrooms, 900 square feet; 4-6 persons, 3 bedrooms, 1,050 square feet; 7 or more persons, 4 bedrooms, 1,350 square feet.

  17. 25 CFR 256.11 - What are the occupancy and square footage standards for a dwelling provided with Category C...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 25 (Indians), edition of 2010-04-01. Department of the Interior, Housing, Housing Improvement Program, § 256.11: occupancy and square footage standards. The section's table lists the maximum number of bedrooms and total dwelling square footage by number of persons in the household: 1-3 persons, 2 bedrooms, 900 square feet; 4-6 persons, 3 bedrooms, 1,050 square feet; 7 or more persons, 4 bedrooms, 1,350 square feet.

  18. 25 CFR 256.11 - What are the occupancy and square footage standards for a dwelling provided with Category C...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 25 (Indians), edition of 2013-04-01. Department of the Interior, Housing, Housing Improvement Program, § 256.11: occupancy and square footage standards. The section's table lists the maximum number of bedrooms and total dwelling square footage by number of persons in the household: 1-3 persons, 2 bedrooms, 900 square feet; 4-6 persons, 3 bedrooms, 1,050 square feet; 7 or more persons, 4 bedrooms, 1,350 square feet.

  19. 25 CFR 256.11 - What are the occupancy and square footage standards for a dwelling provided with Category C...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 25 (Indians), edition of 2012-04-01. Department of the Interior, Housing, Housing Improvement Program, § 256.11: occupancy and square footage standards. The section's table lists the maximum number of bedrooms and total dwelling square footage by number of persons in the household: 1-3 persons, 2 bedrooms, 900 square feet; 4-6 persons, 3 bedrooms, 1,050 square feet; 7 or more persons, 4 bedrooms, 1,350 square feet.

  20. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video that must be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data has become a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold: first, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.
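
    The framework stores video streams together with positional metadata; the sketch below shows one plausible way such a per-segment metadata record could be represented for archiving. All field names are illustrative assumptions, not the VMCMS-GE schema described in the paper.

```python
# Illustrative only: one way to represent a stored UAV video segment plus the
# positional metadata the paper describes. Field names are assumptions, not
# the VMCMS-GE schema.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class UavVideoSegment:
    video_uri: str            # storage location of the encoded segment
    start_utc: str            # ISO-8601 timestamp of the first frame
    duration_s: float         # segment length in seconds
    lat: float                # UAV latitude at segment start
    lon: float                # UAV longitude at segment start
    alt_m: float              # altitude in metres
    target_id: Optional[str]  # identifier of a tracked target, if any

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Hypothetical example record for one 60-second segment.
segment = UavVideoSegment(
    video_uri="s3://uav-archive/flight42/segment_0007.mp4",
    start_utc="2017-09-01T10:30:00+00:00",
    duration_s=60.0, lat=35.1264, lon=33.4299, alt_m=120.0, target_id=None)
print(segment.to_json())
```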

  1. Video analysis of concussion injury mechanism in under-18 rugby

    PubMed Central

    Hendricks, Sharief; O'Connor, Sam; Lambert, Michael; Brown, James C; Burger, Nicholas; Mc Fie, Sarah; Readhead, Clint; Viljoen, Wayne

    2016-01-01

    Background: Understanding the mechanism of injury is necessary for the development of effective injury prevention strategies. Video analysis of injuries provides valuable information on the playing situation and athlete-movement patterns, which can be used to formulate these strategies. Therefore, we conducted a video analysis of the mechanism of concussion injury in junior-level rugby union and compared it with a representative and matched non-injury sample. Methods: Injury reports for 18 concussion events were collected from the 2011 to 2013 under-18 Craven Week tournaments. Also, video footage was recorded for all 3 years. On the basis of the injury events, a representative ‘control’ sample of matched non-injury events in the same players was identified. The video footage, which had been recorded at each tournament, was then retrospectively analysed and coded. 10 injury events (5 tackle, 4 ruck, 1 aerial collision) and 83 non-injury events were analysed. Results: All concussions were a result of contact with an opponent and 60% of players were unaware of the impending contact. For the measurement of head position on contact, 43% had a ‘down’ position, 29% the ‘up and forward’ and 29% the ‘away’ position (n=7). The speed of the injured tackler was observed as ‘slow’ in 60% of injurious tackles (n=5). In 3 of the 4 rucks in which injury occurred (75%), the concussed player was acting defensively either in the capacity of ‘support’ (n=2) or as the ‘jackal’ (n=1). Conclusions: Training interventions aimed at improving peripheral vision, strengthening of the cervical muscles, targeted conditioning programmes to reduce the effects of fatigue, and emphasising safe and effective playing techniques have the potential to reduce the risk of sustaining a concussion injury. PMID:27900149

  2. Shrinkage and footage loss from drying 4/4-inch hard maple lumber.

    Treesearch

    Daniel E. Dunmire

    1968-01-01

    Equations are presented for estimating shrinkage and resulting footage losses due to drying hard maple lumber. The equations, based on board shrinkage data taken from a representative lumber sample, are chiefly intended for use with lots of hard maple lumber, such as carloads, truckloads, or kiln loads, but also can be used for estimating the average shrinkage of...

  3. Can high-intensity exercise be more pleasant?: attentional dissociation using music and video.

    PubMed

    Jones, Leighton; Karageorghis, Costas I; Ekkekakis, Panteleimon

    2014-10-01

    Theories suggest that external stimuli (e.g., auditory and visual) may be rendered ineffective in modulating attention when exercise intensity is high. We examined the effects of music and parkland video footage on psychological measures during and after stationary cycling at two intensities: 10% of maximal capacity below ventilatory threshold and 5% above. Participants (N = 34) were exposed to four conditions at each intensity: music only, video only, music and video, and control. Analyses revealed main effects of condition and exercise intensity for affective valence and perceived activation (p < .001), state attention (p < .05), and exercise enjoyment (p < .001). The music-only and music-and-video conditions led to the highest valence and enjoyment scores during and after exercise regardless of intensity. Findings indicate that attentional manipulations can exert a salient influence on affect and enjoyment even at intensities slightly above ventilatory threshold.

  4. Collaborative Meaning-Making Using Video Footage: Teachers and Researchers Analyse Children's Working Theories about Friendship

    ERIC Educational Resources Information Center

    Hedges, Helen; Cooper, Maria

    2017-01-01

    Children represent their efforts to make sense of their social worlds in various ways. Having, making and being friends are common foci of children's interactions and identity development. These efforts may become visible through analysing video-recorded snippets of children's play. In particular, repeated viewing of episodes of children's…

  5. Incorporating Video Feedback into Self-Management Training to Promote Generalization of Social Initiations by Children with Autism

    ERIC Educational Resources Information Center

    Deitchman, Carole; Reeve, Sharon A.; Reeve, Kenneth F.; Progar, Patrick R.

    2010-01-01

    Self-monitoring is a well-studied and widely used self-management skill in which a person observes and records his or her own behavior. Video feedback (VFB) occurs when an instructor videotapes a child's performances and reviews the footage with the child and potentially allows the child to score or evaluate their own behavior. A multiple-probe…

  6. Use of Video Analysis System for Working Posture Evaluations

    NASA Technical Reports Server (NTRS)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu- and button-driven software prototype written in Supercard (trademark). Human Factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operating rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  7. Graphic Depictions: Portrayals of Mental Illness in Video Games.

    PubMed

    Shapiro, Samuel; Rotter, Merrill

    2016-11-01

    Although studies have examined portrayals of mental illness in the mass media, little attention has been paid to such portrayals in video games. In this descriptive study, the fifty highest-selling video games in each year from 2011 to 2013 were surveyed through application of search terms to the Wikia search engine, with subsequent review of relevant footage on YouTube. Depiction categories were then assigned based on the extent of portrayal and qualitative characteristics compared against mental illness stereotypes in cinema. Twenty-three of the 96 surveyed games depicted at least one character with mental illness. Forty-two characters were identified as portraying mental illness, with most characters classified under a "homicidal maniac" stereotype, although many characters did not clearly reflect cinema stereotypes and were subcategorized based on the shared traits. Video games contain frequent and varied portrayals of mental illness, with depictions most commonly linking mental illness to dangerous and violent behaviors. © 2016 American Academy of Forensic Sciences.

  8. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing a more and more important role in people's social life. Real-time alerting of threatening events and searching for interesting content in large volumes of stored video footage require a human operator to pay full attention to a monitor for long periods. This labor-intensive mode has limited the effectiveness and efficiency of such systems. A framework for simple event detection is presented to advance the automation of video surveillance. An improved inner key point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; and mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real time. Future work is also discussed.
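
    As a rough illustration of two stages in such a pipeline (frame differencing for foreground detection and a HOG-based person detector), the hedged sketch below uses OpenCV's stock HOG pedestrian detector. It is not the authors' framework, and the input file name, thresholds and parameters are assumptions.

```python
# Sketch of two pipeline stages named in the abstract: frame differencing to
# flag foreground motion, and OpenCV's built-in HOG pedestrian detector to
# find people. Thresholds and parameters are illustrative assumptions.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Foreground mask from the difference between consecutive frames.
    motion_mask = cv2.absdiff(gray, prev_gray) > 25
    prev_gray = gray

    if motion_mask.mean() > 0.01:          # enough motion to bother classifying
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # A rule-based event could fire here, e.g. "person in restricted zone".

cap.release()
```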

  9. 4K Video of Colorful Liquid in Space

    NASA Image and Video Library

    2015-10-09

    Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  10. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE [1] neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) [1] neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe [2] dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
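
    As a toy illustration of the "ordered spatio-temporal sequence" idea (a simple recency-gradient code, not the ARTSTORE or Default ARTMAP networks the paper uses), the sketch below folds an ordered list of recognized object labels into one activation vector whose pattern differs with temporal order, so a downstream classifier could assign event labels. The label set, decay constant and function name are assumptions.

```python
# Toy sketch: encode an ordered sequence of object labels into a fixed-length
# "working memory" vector in which more recent items have larger activations,
# so different temporal orders yield different codes. Not the paper's ARTSTORE model.
import numpy as np

VOCAB = ["pedestrian", "vehicle", "bicycle"]   # assumed label set

def encode_sequence(labels, decay=0.7):
    memory = np.zeros(len(VOCAB))
    for label in labels:
        memory *= decay                         # older items fade
        memory[VOCAB.index(label)] += 1.0       # newest item gets full weight
    return memory / (np.linalg.norm(memory) + 1e-9)

# Different temporal orders of the same items produce different codes,
# which is what lets a classifier assign different event labels to them.
print(encode_sequence(["vehicle", "pedestrian"]))
print(encode_sequence(["pedestrian", "vehicle"]))
```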

  11. Examining the Quality of Preservice Science Teachers' Written Reflections When Using Video Recordings, Audio Recordings, and Memories of a Teaching Event

    ERIC Educational Resources Information Center

    Calandra, Brendan; Brantley-Dias, Laurie; Yerby, Johnathan; Demir, Kadir

    2018-01-01

    A group of preservice science teachers edited video footage of their practice teaching to identify and isolate critical incidents. They then wrote guided reflection papers on those critical incidents using different forms of media prompts while they wrote. The authors used a counterbalanced research design to compare the quality of writing that…

  12. Using Student Learning and Development Outcomes to Evaluate a First-Year Undergraduate Group Video Project

    PubMed Central

    Jensen, Murray; Mattheis, Allison; Johnson, Brady

    2012-01-01

    Students in an interdisciplinary undergraduate introductory course were required to complete a group video project focused on nutrition and healthy eating. A mixed-methods approach to data collection involved observing and rating video footage of group work sessions and individual and focus group interviews. These data were analyzed and used to evaluate the effectiveness of the assignment in light of two student learning outcomes and two student development outcomes at the University of Minnesota. Positive results support the continued inclusion of the project within the course, and recommend the assignment to other programs as a viable means of promoting both content learning and affective behavioral objectives. PMID:22383619

  13. Using student learning and development outcomes to evaluate a first-year undergraduate group video project.

    PubMed

    Jensen, Murray; Mattheis, Allison; Johnson, Brady

    2012-01-01

    Students in an interdisciplinary undergraduate introductory course were required to complete a group video project focused on nutrition and healthy eating. A mixed-methods approach to data collection involved observing and rating video footage of group work sessions and individual and focus group interviews. These data were analyzed and used to evaluate the effectiveness of the assignment in light of two student learning outcomes and two student development outcomes at the University of Minnesota. Positive results support the continued inclusion of the project within the course, and recommend the assignment to other programs as a viable means of promoting both content learning and affective behavioral objectives.

  14. Problem-based learning using patient-simulated videos showing daily life for a comprehensive clinical approach

    PubMed Central

    Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi

    2017-01-01

    Objectives: We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Methods: Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students' recall of cases in three categories: video, paper, and non-experienced. Results: Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Conclusions: Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials. PMID:28245193

  15. Problem-based learning using patient-simulated videos showing daily life for a comprehensive clinical approach.

    PubMed

    Ikegami, Akiko; Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi

    2017-02-27

    We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students' recall of cases in three categories: video, paper, and non-experienced. Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials.

  16. Assessing Caribbean Shallow and Mesophotic Reef Fish Communities Using Baited-Remote Underwater Video (BRUV) and Diver-Operated Video (DOV) Survey Techniques.

    PubMed

    Andradi-Brown, Dominic A; Macaya-Solis, Consuelo; Exton, Dan A; Gress, Erika; Wright, Georgina; Rogers, Alex D

    2016-01-01

    Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30-150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering what component of reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth.

  17. Assessing Caribbean Shallow and Mesophotic Reef Fish Communities Using Baited-Remote Underwater Video (BRUV) and Diver-Operated Video (DOV) Survey Techniques

    PubMed Central

    Macaya-Solis, Consuelo; Exton, Dan A.; Gress, Erika; Wright, Georgina; Rogers, Alex D.

    2016-01-01

    Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30–150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering what component of reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth. PMID:27959907

  18. ADDITIONAL FOOTAGE FROM COVERAGE OF THE FIRST MEETING OF THE NATIONAL SPACE COUNCIL

    NASA Image and Video Library

    2017-10-05

    Additional footage from coverage of the first meeting of the National Space Council, held on Oct. 5 at the Smithsonian National Air and Space Museum’s Steven F. Udvar-Hazy Center in Chantilly, Virginia. Vice President Mike Pence is the chair of the council. Participants included NASA’s Acting Administrator Robert Lightfoot, as well as a number of Trump Administration cabinet members and senior officials, and aerospace industry leaders.

  19. A Novel Approach to High Definition, High-Contrast Video Capture in Abdominal Surgery

    PubMed Central

    Cosman, Peter H.; Shearer, Christopher J.; Hugh, Thomas J.; Biankin, Andrew V.; Merrett, Neil D.

    2007-01-01

    Objective: The aim of this study was to define the best available option for video capture of surgical procedures for educational and archival purposes, with a view to identifying methods of capturing high-quality footage and identifying common pitfalls. Summary Background Data: Several options exist for those who wish to record operative surgical techniques on video. While high-end equipment is an unnecessary expense for most surgical units, several techniques are readily available that do not require industrial-grade audiovisual recording facilities, but not all are suited to every surgical application. Methods: We surveyed and evaluated the available technology for video capture in surgery. Our evaluation included analyses of video resolution, depth of field, contrast, exposure, image stability, and frame composition, as well as considerations of cost, accessibility, utility, feasibility, and economies of scale. Results: Several video capture options were identified, and the strengths and shortcomings of each were catalogued. None of the commercially available options was deemed suitable for high-quality video capture of abdominal surgical procedures. A novel application of off-the-shelf technology was devised to address these issues. Conclusions: Excellent quality video capture of surgical procedures within deep body cavities is feasible using commonly available equipment and technology, with minimal technical difficulty. PMID:17414600

  20. Excessive users of violent video games do not show emotional desensitization: an fMRI study.

    PubMed

    Szycik, Gregor R; Mohammadi, Bahram; Hake, Maria; Kneer, Jonas; Samii, Amir; Münte, Thomas F; Te Wildt, Bert T

    2017-06-01

    Playing violent video games has been linked to long-term emotional desensitization. We hypothesized that desensitization effects in excessive users of violent video games should lead to decreased brain activations to highly salient emotional pictures in emotion-sensitive brain regions. Twenty-eight male adult subjects showing excessive long-term use of violent video games and age- and education-matched control participants were examined in two experiments using standardized emotional pictures of positive, negative and neutral valence. No group differences were revealed even at reduced statistical thresholds, which speaks against desensitization of emotion-sensitive brain regions as a result of excessive use of violent video games.

  1. Original footage of the Chilean miners with manganism published in Neurology in 1967.

    PubMed

    Miranda, Marcelo; Bustamante, M Leonor; Mena, Francisco; Lees, Andrew

    2015-12-15

    Manganism has captured the imagination of neurologists for more than a century because of its similarities to Parkinson disease and its indirect but seminal role in the "l-dopa miracle." We present unpublished footage of the original case series reported in Neurology® in 1967 by Mena and Cotzias depicting the typical neurologic signs of manganism in 4 Chilean miners and their response to high doses of l-dopa. © 2015 American Academy of Neurology.

  2. Presumed filter-feeding in a deep-sea benthic shrimp (Decapoda, Caridea, Stylodactylidae), with records of the deepest occurrence of carideans.

    PubMed

    Wicksten, Mary; De Grave, Sammy; France, Scott; Kelley, Christopher

    2017-01-01

    Using the remotely operated vehicle Deep Discoverer, we observed a large stylodactylid shrimp resting on a sedimented sea floor at 4826 m in the Marianas Trench Marine National Monument. The shrimp was not collected but most closely resembled Bathystylodactylus bathyalis, known previously only from a single broken specimen. Video footage shows the shrimp facing into the current and extending its upraised and fringed first and second pereopods, presumably capturing passing particles. The video footage is the first ever to show a living deep-sea stylodactylid and constitutes the deepest record for the family. We provide a list of the deepest reports of caridean shrimps world-wide.

  3. Use of mobile video show for community behavior change on maternal and newborn health in rural Ethiopia.

    PubMed

    Desta, Binyam Fekadu; Mohammed, Hajira; Barry, Danika; Frew, Aynalem Hailemichael; Hepburn, Kenneth; Claypoole, Christine

    2014-01-01

    A number of factors affect Ethiopia's efforts to meet Millennium Development Goals 4 and 5 to reduce maternal and newborn mortality. The Maternal and Newborn Health in Ethiopia Partnership (MaNHEP) project, as part of its overall strategy, implemented behavior change communication interventions to increase women's demand for and use of antenatal, birth, and postnatal services. Seeking to reach "media-dark" areas, MaNHEP implemented a mobile video show focused on maternal and newborn health. We report on the effect of the mobile video show on community knowledge, attitudes, and beliefs regarding maternal and newborn health, especially regarding care-seeking behavior and use of a skilled attendant for birth and postnatal care. Two main data sources are used: qualitative data gathered through mobile video show participant discussions in 31 randomly selected kebeles (villages with about 1000 households) and focus groups in 4 kebeles (2 from each region), and quantitative data generated from 510 randomly selected adults participating in MaNHEP's endline survey. Qualitative data were thematically analyzed by the research team, and the accuracy of the transcriptions and categorization was also checked. The mobile video show reached a total of 28,389 mostly young or adult females in 51 kebeles. At endline, mobile video show attendees (vs nonattendees) reported significantly (P < .001) higher rates of recall of key MaNHEP messages about use of health extension workers for pregnancy registration, labor and birth notification, and postnatal care. Qualitative analysis yielded 3 overarching themes: mirrors to the community (the portrayal is accurate); call to action (we have to change this); and improvement ideas (suggested positive actions). The entertaining nature and local organization of the mobile video show event encouraged attendance. Building the video around recognizable characters (particularly the husbands) contributed to bringing about desired changes in people

  4. Video-based self-review: comparing Google Glass and GoPro technologies.

    PubMed

    Paro, John A M; Nazareli, Rahim; Gurjala, Anadev; Berger, Aaron; Lee, Gordon K

    2015-05-01

    Professionals in a variety of specialties use video-based review as a method of constant self-evaluation. We believe critical self-reflection will allow a surgical trainee to identify methods for improvement throughout residency and beyond. We have used 2 new popular technologies to evaluate their role in accomplishing the previously mentioned objectives. Our group investigated Google Glass and GoPro cameras. Medical students, residents, and faculty were invited to wear each of the devices during a scheduled operation. After the case, each participant was asked to comment on a number of features of the device including comfort, level of distraction/interference with operating, ease of video acquisition, and battery life. Software and hardware specifications were compiled and compared by the authors. A "proof-of-concept" was also performed using the video-conferencing abilities of Google Glass to perform a simulated flap check. The technical specifications of the 2 cameras favor GoPro over Google Glass. Glass records in 720p with 5-MP still shots, and the GoPro records in 1080p with 12-MP still shots. Our tests of battery life showed more than 2 hours of continuous video with GoPro, and less than 1 hour for Glass. Favorable features of Google Glass included comfort and relative ease of use; they could not comfortably wear loupes while operating, and would have preferred longer hands-free video recording. The GoPro was slightly more cumbersome and required a nonsterile team member to activate all pictures or video; however, loupes could be worn. Google Glass was successfully used in the hospital for a simulated flap check, with overall audio and video being transmitted--fine detail was lost, however. There are benefits and limitations to each of the devices tested. Google Glass is in its infancy and may gain a larger intraoperative role in the future. We plan to use Glass as a way for trainees to easily acquire intraoperative footage as a means to "review tape" and

  5. Bringing science from the top of the world to the rest of the world: using video to describe earthquake research in Nepal following the devastating 2015 M7.8 Gorkha earthquake

    NASA Astrophysics Data System (ADS)

    Karplus, M. S.; Barajas, A.; Garibay, L.

    2016-12-01

    In response to the April 25, 2015 M7.8 earthquake on the Main Himalayan Thrust in Nepal, NSF Geosciences funded a rapid seismological response project entitled NAMASTE (Nepal Array Measuring Aftershock Seismicity Trailing Earthquake). This project included the deployment, maintenance, and demobilization of a network of 45 temporary seismic stations from June 2015 to May 2016. During the demobilization of the seismic network, video footage was recorded to tell the story of the NAMASTE team's seismic research in Nepal using short movies. In this presentation, we will describe these movies and discuss our strategies for effectively communicating this research to both the academic and general public with the goals of promoting earthquake hazards and international awareness and inspiring enthusiasm about learning and participating in science research. For example, an initial screening of these videos took place for an Introduction to Geology class at the University of Texas at El Paso to obtain feedback from approximately 100 first-year students with only a basic geology background. The feedback was then used to inform final cuts of the video suitable for a range of audiences, as well as to help guide future videography of field work. The footage is also being cut into a short, three-minute video to be featured on the website of The University of Texas at El Paso, home to several of the NAMASTE team researchers.

  6. Can Video Self-Modeling Improve Affected Limb Reach and Grasp Ability in Stroke Patients?

    PubMed

    Steel, Kylie Ann; Mudie, Kurt; Sandoval, Remi; Anderson, David; Dogramaci, Sera; Rehmanjan, Mohammad; Birznieks, Ingvars

    2018-01-01

    The authors examined whether feedforward video self-modeling (FF VSM) would improve control over the affected limb, movement self-confidence, movement self-consciousness, and well-being in 18 stroke survivors. Participants completed a cup transport task and 2 questionnaires related to psychological processes pre- and postintervention. Pretest video footage of the unaffected limb performing the task was edited to create a best-of or mirror-reversed training DVD, creating the illusion that patients were performing proficiently with the affected limb. The training yielded significant improvements for the forward movement of the affected limb compared to the unaffected limb. Significant improvements were also seen in movement self-confidence, movement self-consciousness, and well-being. FF VSM appears to be a viable way to improve motor ability in populations with movement disorders.

  7. 77 FR 8811 - Takes of Marine Mammals Incidental to Specified Activities; St. George Reef Light Station...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... provide the most accurate means of documenting species composition, age and sex class of pinnipeds using...; Fate of the animal(s); and Photographs or video footage of the animal(s) (if equipment is available... discovery. The SGRLPS will provide photographs or video footage (if available) or other documentation of the...

  8. 78 FR 71576 - Takes of Marine Mammals Incidental to Specified Activities; St. George Reef Light Station...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... species composition, age and sex class of pinnipeds using the project site during human activity periods...; Fate of the animal(s); and Photographs or video footage of the animal(s) (if equipment is available... hours of the discovery. The Society will provide photographs or video footage (if available) or other...

  9. Satellite Video Shows Movement of Major U.S. Winter Storm

    NASA Image and Video Library

    2014-02-12

    A new NASA video of NOAA's GOES satellite imagery shows three days of movement of the massive winter storm that stretches from the southern U.S. to the northeast. Visible and infrared imagery from NOAA's GOES-East or GOES-13 satellite from Feb. 10 at 1815 UTC/1:15 p.m. EST to Feb. 12 to 1845 UTC/1:45 p.m. EST were compiled into a video made by NASA/NOAA's GOES Project at NASA's Goddard Space Flight Center in Greenbelt, Md. In the video, viewers can see the development and movement of the clouds associated with the progression of the frontal system and related low pressure areas that make up the massive storm. The video also shows the snow covered ground over the Great Lakes region and Ohio Valley that stretches to northern New England. The clouds and fallen snow data from NOAA's GOES-East satellite were overlaid on a true-color image of land and ocean created by data from the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua and Terra satellites. On February 12 at 10 a.m. EST, NOAA's National Weather Service or NWS continued to issue watches and warnings from Texas to New England. Specifically, NWS cited Winter Storm Warnings and Winter Weather Advisories were in effect from eastern Texas eastward across the interior section of southeastern U.S. states and across much of the eastern seaboard including the Appalachians. Winter storm watches are in effect for portions of northern New England as well as along the western slopes of northern and central Appalachians. For updates on local forecasts, watches and warnings, visit NOAA's www.weather.gov webpage. NOAA's Weather Prediction Center or WPC noted the storm is expected to bring "freezing rain spreading into the Carolinas, significant snow accumulations are expected in the interior Mid-Atlantic states tonight into Thursday and ice storm warnings and freezing rain advisories are in effect across much of central Georgia. GOES satellites provide the kind of continuous

  10. The accuracy and reproducibility of video assessment in the pitch-side management of concussion in elite rugby.

    PubMed

    Fuller, G W; Kemp, S P T; Raftery, M

    2017-03-01

    To investigate the accuracy and reliability of side-line video review of head impact events to aid identification of concussion in elite sport. Diagnostic accuracy and inter-rater agreement study. Immediate care, match day and team doctors involved in the 2015 Rugby Union World Cup viewed 20 video clips showing broadcaster's footage of head impact events occurring during elite Rugby matches. Subjects subsequently recorded whether any criteria warranting permanent removal from play or medical room head injury assessment were present. The accuracy of these ratings was compared to consensus expert opinion by calculating mean sensitivity and specificity across raters. The reproducibility of doctors' decisions was additionally assessed using raw agreement and Gwet's AC1 chance-corrected agreement coefficient. Forty rugby medicine doctors were included in the study. Compared to the expert reference standard, overall sensitivity and specificity of doctors' decisions were 77.5% (95% CI 73.1-81.5%) and 53.3% (95% CI 48.2-58.2%) respectively. Overall there was raw agreement of 67.8% (95% CI 57.9-77.7%) between doctors across all video clips. The chance-corrected Gwet's AC1 agreement coefficient was 0.39 (95% CI 0.17-0.62), indicating fair agreement. Rugby World Cup doctors demonstrated moderate accuracy and fair reproducibility in head injury event decision making when assessing video clips of head impact events. The use of real-time video may improve the identification, decision making and management of concussion in elite sports. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
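
    For readers unfamiliar with the chance-corrected statistic reported here, the hedged sketch below computes Gwet's AC1 for two raters making a binary decision (e.g. remove from play or not). It is the simplified two-rater, two-category form rather than the study's multi-rater analysis, and the example ratings are invented for illustration.

```python
# Gwet's AC1 for two raters and a binary decision (simplified two-rater form).
# ratings_a and ratings_b are equal-length lists of 0/1 decisions per clip.
def gwet_ac1(ratings_a, ratings_b):
    n = len(ratings_a)
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n   # observed agreement
    pi = (sum(ratings_a) + sum(ratings_b)) / (2 * n)             # mean "positive" rate
    pe = 2 * pi * (1 - pi)                                       # chance agreement
    return (pa - pe) / (1 - pe)

# Example: 10 head-impact clips rated by two doctors (1 = remove from play).
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(gwet_ac1(a, b), 3))
```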

  11. Facial Comparison from CCTV footage: The competence and confidence of the jury.

    PubMed

    Walker, Heather; Tough, Ann

    2015-12-01

    CCTV footage is commonly used in the court room to help visualise the crime in question and to help identify the offender. Unfortunately the majority of surveillance cameras produce such poor quality images that the task of identifying individuals can be extremely difficult. This study aimed at determining whether the task of identifying the offender in CCTV footage was one which a jury should be competent to do, or whether expert evidence would be beneficial in such cases. The ability of potential jury members, the general public, was tested by asking participants to play the role of a jury member by means of an online survey. Potential jury members viewed CCTV in which a simulated offence took place, and were subsequently asked to compare still images of a defendant to the offender to try to determine if they were competent and confident about making a judgement as to whether the defendant committed the crime. Factors such as age, gender and profession of the potential jury members were considered, as well as the type of crime committed, in order to establish if these play any role in the decision made by potential jury members. These factors did not appear to play a significant role; however confidence was also investigated and it became very evident that this was a factor that must be taken into consideration when determining the requirement for expert contribution in facial comparisons. Jury members may well be willing and competent to a basic level in carrying out a facial comparison but if they lack a certain level of confidence in their ability and decision making then this task is more suitable for an expert with experience and skills in this field. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  12. NFL Films audio, video, and film production facilities

    NASA Astrophysics Data System (ADS)

    Berger, Russ; Schrag, Richard C.; Ridings, Jason J.

    2003-04-01

    The new NFL Films 200,000 sq. ft. headquarters is home for the critically acclaimed film production that preserves the NFL's visual legacy week-to-week during the football season, and is also the technical plant that processes and archives football footage from the earliest recorded media to the current network broadcasts. No other company in the country shoots more film than NFL Films, and the inclusion of cutting-edge video and audio formats demands that their technical spaces continually integrate the latest in the ever-changing world of technology. This facility houses a staggering array of acoustically sensitive spaces where music and sound are equal partners with the visual medium. Over 90,000 sq. ft. of sound critical technical space is comprised of an array of sound stages, music scoring stages, audio control rooms, music writing rooms, recording studios, mixing theaters, video production control rooms, editing suites, and a screening theater. Every production control space in the building is designed to monitor and produce multi channel surround sound audio. An overview of the architectural and acoustical design challenges encountered for each sophisticated listening, recording, viewing, editing, and sound critical environment will be discussed.

  13. Habitat diversity in the Northeastern Gulf of Mexico: Selected video clips from the Gulfstream Natural Gas Pipeline digital archive

    USGS Publications Warehouse

    Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.

    2011-01-01

    This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.

  14. National Anthem

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A montage of video clips from over the years, this footage shows spacecrews, launches, and landings for different orbiters and missions. Clips include the Endeavour and Atlantis orbiters and are set to the music of the American National Anthem.

  15. The Video Interaction Guidance approach applied to teaching communication skills in dentistry.

    PubMed

    Quinn, S; Herron, D; Menzies, R; Scott, L; Black, R; Zhou, Y; Waller, A; Humphris, G; Freeman, R

    2016-05-01

    To examine dentists' views of a novel video review technique to improve communication skills in complex clinical situations. Dentists (n = 3) participated in a video review known as Video Interaction Guidance to encourage more attuned interactions with their patients (n = 4). Part of this process is to identify where dentists and patients reacted positively and effectively. Each dentist was presented with short segments of video footage taken during an appointment with a patient with intellectual disabilities and communication difficulties. Having observed their interactions with patients, dentists were asked to reflect on their communication strategies with the assistance of a trained VIG specialist. Dentists reflected that their VIG session had been insightful and considered the review process as beneficial to communication skills training in dentistry. They believed that this technique could significantly improve the way dentists interact and communicate with patients. The VIG sessions increased their awareness of the communication strategies they use with their patients and were perceived as neither uncomfortable nor threatening. The VIG session was beneficial in this exploratory investigation because the dentists could identify when their interactions were most effective. Awareness of their non-verbal communication strategies and the need to adopt these behaviours frequently were identified as key benefits of this training approach. One dentist suggested that the video review method was supportive because it was undertaken by a behavioural scientist rather than a professional counterpart. Some evidence supports the VIG approach in this specialist area of communication skills and dental training. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. In-flight Video Captured by External Tank Camera System

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.

  17. Teaching school teachers to recognize respiratory distress in asthmatic children.

    PubMed

    Sapien, Robert E; Fullerton-Gleason, L; Allen, N

    2004-10-01

    To demonstrate that school teachers can be taught to recognize respiratory distress in asthmatic children. Forty-five school teachers received a one-hour educational session on childhood asthma. Each education session consisted of two portions: video footage of asthmatic children exhibiting respiratory distress, and a didactic presentation. Pre- and posttests on general asthma knowledge, signs of respiratory distress on video footage, and comfort level with asthma knowledge and medications were administered. General asthma knowledge median scores increased significantly, pre = 60% correct, post = 70% (p < 0.0001). The ability to visually recognize respiratory distress also significantly improved (pre-median = 66.7% correct, post = 88.9% [p < 0.0001]). Teachers' comfort level with asthma knowledge and medications improved. Using video footage, school teachers can be taught to visually recognize respiratory distress in asthmatic children. Improvement in visual recognition of respiratory distress was greater than improvement in didactic asthma information.

  18. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study.

    PubMed

    Pan, Wei; Gao, Xuemei; Shi, Shuo; Liu, Fuqu; Li, Chao

    2017-01-01

    Many empirical studies have shown that long-term exposure to violent video games can lead to a series of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF (fALFF) to quantify spontaneous brain activity. The results showed no significant difference in ALFF or fALFF between the violent video game group and the control group, indicating that long-term exposure to violent video games does not significantly influence spontaneous brain activity, especially in core brain regions involved in executive control, moral judgment and short-term memory. This implies that the adverse impact of violent video games is exaggerated.
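
    ALFF and fALFF summarize the strength of slow BOLD fluctuations. As a rough sketch (not the study's preprocessing pipeline), the code below estimates both measures for a single voxel time series by integrating spectral amplitude in the conventional 0.01-0.08 Hz band; the band limits, repetition time and synthetic example data are assumptions.

```python
# Rough sketch of ALFF / fALFF for one voxel time series (not the study's pipeline).
# ALFF: summed spectral amplitude in the low-frequency band (here 0.01-0.08 Hz);
# fALFF: that amplitude divided by the amplitude over the whole spectrum.
import numpy as np

def alff_falff(ts, tr=2.0, band=(0.01, 0.08)):
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts))            # spectral amplitude
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    alff = amp[in_band].sum()
    falff = alff / (amp.sum() + 1e-12)
    return alff, falff

# Example with a synthetic 5-minute series sampled every 2 s (TR = 2.0 s).
rng = np.random.default_rng(0)
t = np.arange(150) * 2.0
signal = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(len(t))
print(alff_falff(signal))
```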

  19. Spontaneous Brain Activity Did Not Show the Effect of Violent Video Games on Aggression: A Resting-State fMRI Study

    PubMed Central

    Pan, Wei; Gao, Xuemei; Shi, Shuo; Liu, Fuqu; Li, Chao

    2018-01-01

    A large body of empirical research has shown that long-term exposure to violent video games can lead to a range of negative effects. Although research has focused on the neural basis of the correlation between violent video games and aggression, little is known about whether spontaneous brain activity is associated with violent video game exposure. To address this question, we measured spontaneous brain activity using resting-state functional magnetic resonance imaging (fMRI). We used the amplitude of low-frequency fluctuations (ALFF) and fractional ALFF (fALFF) to quantify spontaneous brain activity. The results showed no significant difference in ALFF or fALFF between the violent video game group and the control group, indicating that long-term exposure to violent video games does not significantly influence spontaneous brain activity, particularly in core brain regions supporting executive control, moral judgment, and short-term memory. This implies the adverse impact of violent video games is exaggerated. PMID:29375416

  20. Are traditional methods of determining nest predators and nest fates reliable? An experiment with Wood Thrushes (Hylocichla mustelina) using miniature video cameras

    USGS Publications Warehouse

    Williams, Gary E.; Wood, P.B.

    2002-01-01

    We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998–2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.

  1. VIDEO REVIEW: Maths in a Box video: Take-off - moving bodies with constant mass

    NASA Astrophysics Data System (ADS)

    Marks, Ken

    1999-09-01

    I write this review as a PGCE maths tutor, and therefore from the perspective of using parts of this series at A-level. The sample video, `Take-off - moving bodies with constant mass', is a good example of combining real footage with commentary as the viewer is invited to think about modelling the take-off of an aircraft. The style is reminiscent of Open University presentations and here the challenge is to determine the necessary length of the runway. The video is split into two sections. The first, commentary, section works quite well, although it jars a bit to hear Newton's Third Law put across as `Action and reaction are equal and opposite'; this is a familiar offering but one that still causes mystification in the sixth form. The viewer is invited to think about setting up equations, and reminded that the chain rule will be necessary to solve the differential equation generated from Newton's Second Law. This gives a good indication of the level of mathematics required. Unfortunately the flow is then somewhat disturbed by a strong emphasis on boundary conditions. If the student can cope with the general level of calculus required, this aspect of the challenge would also seem to fit more naturally into the second section of the video. This second section looks at setting up the equations and `solutions'. It can be used after classroom discussion, and takes the viewer through three, increasingly sophisticated, models involving functions for drag and resistance forces. On the whole this is clear and helpful, but for some reason the solutions each stop with an equation linking the length of the runway to the take-off velocity, failing to make use of the second equation to eliminate this intermediate variable. All in all, it is a useful addition to resources for A-level, particularly if students are also following the sort of mechanics syllabus (within mathematics) that emphasizes modelling.

  2. MSFC Historic Resource Reel

    NASA Image and Video Library

    2013-12-11

    Name/Title of Video: Marshall Space Flight Center Historic Resource Reel. Description: A brief collection of film and video b-roll of historic events and programs associated with NASA's Marshall Space Flight Center in Huntsville, Ala. For more information and/or more footage of these events, please contact the Marshall Center Public & Employee Communications Office. Graphic Information: file footage. PAO Name: News Chief Jennifer Stanfield or MSFC Historian Mike Wright. Phone Number: 256-544-0034. Email Address: jennifer.stanfield@nasa.gov or mike.d.wright@nasa.gov

  3. President Clinton's Statement on the Comprehensive Nuclear Test Ban Treaty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clinton, Bill

    This is video footage of President Clinton delivering a statement to the press on signing the Comprehensive Nuclear Test Ban Treaty and answering press pool questions before departing Kansas City, Missouri. This footage is official public record produced by the White House Television (WHTV) crew, provided by the Clinton Presidential Library.

  4. Social learning in nest-building birds watching live-streaming video demonstrators.

    PubMed

    Guillette, Lauren M; Healy, Susan D

    2018-02-13

    Determining the role that social learning plays in construction behaviours, such as nest building or tool manufacture, could be improved if more experimental control could be gained over the exact public information that is provided by the demonstrator to the observing individual. Using video playback allows the experimenter to choose what information is provided, but will only be useful in determining the role of social learning if observers attend to, and learn from, videos in a manner that is similar to live demonstration. The goal of the current experiment was to test whether live-streamed video presentations of nest building by zebra finches Taeniopygia guttata would lead observers to copy the material choice demonstrated to them. Here, males that had not previously built a nest were given an initial preference test between materials of two colours. Those observers then watched live-stream footage of a familiar demonstrator building a nest with material of the colour that the observer did not prefer. After this experience, observers were given the chance to build a nest with materials of the two colours. Although two-thirds of the observer males preferred material of the demonstrated colour after viewing the demonstrator build a nest with material of that colour more than they had previously, their preference for the demonstrated material was not as strong as that of observers that had viewed live demonstrator builders in a previous experiment. Our results suggest researchers should proceed with caution before using video demonstration in tests of social learning. This article is protected by copyright. All rights reserved.

  5. Spaceship Skylab: Wings of Discovery

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This video shows footage from the missions on the Skylab space station. The resident astronauts are seen as they perform spacewalks and various scientific experiments, including solar studies, Earth observations, metal alloy creation, and the effects of microgravity on the human body. The importance of these experiments is described.

  6. 78 FR 34370 - Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM12-3-000] Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing How To File Electric Quarterly Reports Using the Web Interface Take notice that the Federal Energy Regulatory Commission (Commission) is making available on its Web site ...

  7. New Inspiring Planetarium Show Introduces ALMA to the Public

    NASA Astrophysics Data System (ADS)

    2009-03-01

    As part of a wide range of education and public outreach activities for the International Year of Astronomy 2009 (IYA2009), ESO, together with the Association of French Language Planetariums (APLF), has produced a 30-minute planetarium show, In Search of our Cosmic Origins. It is centred on the global ground-based astronomical Atacama Large Millimeter/submillimeter Array (ALMA) project and represents a unique chance for planetariums to be associated with the IYA2009. (Accompanying material: ESO PR Photo 09a/09, logo of the ALMA planetarium show; ESO PR Photo 09b/09, Galileo's first observations with a telescope; ESO PR Photo 09c/09, the ALMA Observatory; ESO PR Photo 09d/09, the Milky Way band; ESO PR Video 09a/09, trailer in English.) ALMA is the leading telescope for observing the cool Universe -- the relic radiation of the Big Bang, and the molecular gas and dust that constitute the building blocks of stars, planetary systems, galaxies and life itself. It is currently being built in the extremely arid environment of the Chajnantor plateau, at 5000 metres altitude in the Chilean Andes, and will start scientific observations around 2011. ALMA, the largest current astronomical project, is a revolutionary telescope, comprising a state-of-the-art array of 66 giant 12-metre and 7-metre diameter antennas observing at millimetre and submillimetre wavelengths. In Search of our Cosmic Origins highlights the unprecedented window on the Universe that this facility will open for astronomers. "The show gives viewers a fascinating tour of the highest observatory on Earth, and takes them from there out into our Milky Way, and beyond," says Douglas Pierce-Price, the ALMA Public Information Officer at ESO. Edited by world fulldome experts Mirage3D, the emphasis of the new planetarium show is on the incomparable scientific adventure of the ALMA project. A young female astronomer guides the audience through a story that includes unique animations and footage, leading the viewer from the first observations by Galileo

  8. Northern Goshawk diet in Minnesota: An Analysis using video recording systems

    USGS Publications Warehouse

    Smithers, B.L.; Boal, C.W.; Andersen, D.E.

    2005-01-01

    We used video-recording systems to collect diet information at 13 Northern Goshawk (Accipiter gentilis) nests in Minnesota during the 2000, 2001, and 2002 breeding seasons. We collected 4871 hr of video footage, from which 652 prey deliveries were recorded. The majority of prey deliveries identified were mammals (62%), whereas birds (38%) composed a smaller proportion of diet. Mammals accounted for 61% of biomass delivered, and avian prey items accounted for 39% of prey biomass. Sciurids and leporids accounted for 70% of the identified prey. Red squirrel (Tamiasciurus hudsonicus), eastern chipmunk (Tamias striatus), and snowshoe hare (Lepus americanus) were the dominant mammals identified in the diet, while American Crow (Corvus brachyrhynchos) and Ruffed Grouse (Bonasa umbellus) were the dominant avian prey delivered to nests. On average, breeding goshawks delivered 2.12 prey items/d, and each delivery averaged 275 g for a total of 551 g delivered/d. However, daily (P < 0.001) and hourly (P = 0.01) delivery rates varied among nests. Delivery rates (P = 0.01) and biomass delivered (P = 0.038) increased with brood size. Diversity and equitability of prey used was similar among nests and was low throughout the study area, most likely due to the dominance of red squirrel in the diet. ?? 2005 The Raptor Research Foundation, Inc.

  9. Task relevance predicts gaze in videos of real moving scenes.

    PubMed

    Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C

    2011-09-01

    Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.

  10. Feral Cats Are Better Killers in Open Habitats, Revealed by Animal-Borne Video.

    PubMed

    McGregor, Hugh; Legge, Sarah; Jones, Menna E; Johnson, Christopher N

    2015-01-01

    One of the key gaps in understanding the impacts of predation by small mammalian predators on prey is how habitat structure affects the hunting success of small predators, such as feral cats. These effects are poorly understood due to the difficulty of observing actual hunting behaviours. We attached collar-mounted video cameras to feral cats living in a tropical savanna environment in northern Australia, and measured variation in hunting success among different microhabitats (open areas, dense grass and complex rocks). From 89 hours of footage, we recorded 101 hunting events, of which 32 were successful. Of these kills, 28% were not eaten. Hunting success was highly dependent on microhabitat structure surrounding prey, increasing from 17% in habitats with dense grass or complex rocks to 70% in open areas. This research shows that habitat structure has a profound influence on the impacts of small predators on their prey. This has broad implications for management of vegetation and disturbance processes (like fire and grazing) in areas where feral cats threaten native fauna. Maintaining complex vegetation cover can reduce predation rates of small prey species from feral cat predation.

  11. Feral Cats Are Better Killers in Open Habitats, Revealed by Animal-Borne Video

    PubMed Central

    McGregor, Hugh; Legge, Sarah; Jones, Menna E.; Johnson, Christopher N.

    2015-01-01

    One of the key gaps in understanding the impacts of predation by small mammalian predators on prey is how habitat structure affects the hunting success of small predators, such as feral cats. These effects are poorly understood due to the difficulty of observing actual hunting behaviours. We attached collar-mounted video cameras to feral cats living in a tropical savanna environment in northern Australia, and measured variation in hunting success among different microhabitats (open areas, dense grass and complex rocks). From 89 hours of footage, we recorded 101 hunting events, of which 32 were successful. Of these kills, 28% were not eaten. Hunting success was highly dependent on microhabitat structure surrounding prey, increasing from 17% in habitats with dense grass or complex rocks to 70% in open areas. This research shows that habitat structure has a profound influence on the impacts of small predators on their prey. This has broad implications for management of vegetation and disturbance processes (like fire and grazing) in areas where feral cats threaten native fauna. Maintaining complex vegetation cover can reduce predation rates of small prey species from feral cat predation. PMID:26288224

  12. Video Denoising via Dynamic Video Layering

    NASA Astrophysics Data System (ADS)

    Guo, Han; Vaswani, Namrata

    2018-07-01

    Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts - the "low-rank layer", the "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.
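
    The layering idea above lends itself to a matrix decomposition. The toy sketch below is not the authors' dynamic video layering algorithm; it is a generic, robust-PCA-flavoured illustration (alternating truncated SVD and soft-thresholding) of splitting a video matrix into a low-rank layer, a sparse layer, and a small residual. All names and parameters are assumptions made for the example.

    ```python
    # Hedged sketch: generic low-rank + sparse split of a video matrix
    # (frames flattened into columns). Not the paper's algorithm.
    import numpy as np

    def lowrank_sparse_split(M, rank=5, sparse_thresh=0.1, n_iter=20):
        """Alternate a truncated-SVD fit (low-rank layer L) with
        soft-thresholding of the remainder (sparse layer S)."""
        S = np.zeros_like(M)
        for _ in range(n_iter):
            # Low-rank layer: best rank-r approximation of M - S.
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Sparse layer: soft-threshold whatever the low-rank layer misses.
            R = M - L
            S = np.sign(R) * np.maximum(np.abs(R) - sparse_thresh, 0.0)
        residual = M - L - S          # small, bounded remainder
        return L, S, residual

    # Toy usage: 100 "frames" of a 30x40 video, one column per frame.
    video = np.random.default_rng(0).random((30 * 40, 100))
    L, S, res = lowrank_sparse_split(video)
    ```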

  13. Porn video shows, local brew, and transactional sex: HIV risk among youth in Kisumu, Kenya

    PubMed Central

    2011-01-01

    Background: Kisumu has shown a rising HIV prevalence over the past sentinel surveillance surveys, and most new infections are occurring among youth. We conducted a qualitative study to explore risk situations that can explain the high HIV prevalence among youth in Kisumu town, Kenya. Methods: We conducted in-depth interviews with 150 adolescents aged 15 to 20, held 4 focus group discussions, and made 48 observations at places where youth spend their free time. Results: Porn video shows and local brew dens were identified as popular events where unprotected multipartner, concurrent, coerced and transactional sex occurs between adolescents. Video halls - rooms with a TV and VCR - often show pornography at night for a very small fee, and minors are allowed. Forced sex, gang rape and multiple concurrent relationships characterised the sexual encounters of youth, frequently facilitated by the abuse of alcohol, which is available for minors at low cost in local brew dens. For many sexually active girls, their vulnerability to STI/HIV infection is enhanced due to financial inequality, gender-related power difference and cultural norms. The desire for love and sexual pleasure also contributed to their multiple concurrent partnerships. A substantial number of girls and young women engaged in transactional sex, often with much older working partners. These partners had a stronger socio-economic position than young women, enabling them to use money/gifts as leverage for sex. Condom use was irregular during all types of sexual encounters. Conclusions: In Kisumu, local brew dens and porn video halls facilitate risky sexual encounters between youth. These places should be regulated and monitored by the government. Our study strongly points to female vulnerabilities and the role of men in perpetuating the local epidemic. Young men should be targeted in prevention activities, to change their attitudes related to power and control in relationships. Girls should be empowered how to

  14. Porn video shows, local brew, and transactional sex: HIV risk among youth in Kisumu, Kenya.

    PubMed

    Njue, Carolyne; Voeten, Helene A C M; Remes, Pieter

    2011-08-08

    Kisumu has shown a rising HIV prevalence over the past sentinel surveillance surveys, and most new infections are occurring among youth. We conducted a qualitative study to explore risk situations that can explain the high HIV prevalence among youth in Kisumu town, Kenya. We conducted in-depth interviews with 150 adolescents aged 15 to 20, held 4 focus group discussions, and made 48 observations at places where youth spend their free time. Porn video shows and local brew dens were identified as popular events where unprotected multipartner, concurrent, coerced and transactional sex occurs between adolescents. Video halls - rooms with a TV and VCR - often show pornography at night for a very small fee, and minors are allowed. Forced sex, gang rape and multiple concurrent relationships characterised the sexual encounters of youth, frequently facilitated by the abuse of alcohol, which is available for minors at low cost in local brew dens. For many sexually active girls, their vulnerability to STI/HIV infection is enhanced due to financial inequality, gender-related power difference and cultural norms. The desire for love and sexual pleasure also contributed to their multiple concurrent partnerships. A substantial number of girls and young women engaged in transactional sex, often with much older working partners. These partners had a stronger socio-economic position than young women, enabling them to use money/gifts as leverage for sex. Condom use was irregular during all types of sexual encounters. In Kisumu, local brew dens and porn video halls facilitate risky sexual encounters between youth. These places should be regulated and monitored by the government. Our study strongly points to female vulnerabilities and the role of men in perpetuating the local epidemic. Young men should be targeted in prevention activities, to change their attitudes related to power and control in relationships. Girls should be empowered how to negotiate safe sex, and their poverty should

  15. Low emotional response to traumatic footage is associated with an absence of analogue flashbacks: An individual participant data meta-analysis of 16 trauma film paradigm experiments

    PubMed Central

    Clark, Ian A.; Mackay, Clare E.; Holmes, Emily A.

    2015-01-01

    Most people will experience or witness a traumatic event. A common occurrence after trauma is the experience of involuntary emotional memories of the traumatic event, herewith “flashbacks”. Some individuals, however, report no flashbacks. Prospective work investigating psychological factors associated with an absence of flashbacks is lacking. We performed an individual participant data meta-analysis on 16 experiments (n = 458) using the trauma film paradigm to investigate the association of emotional response to traumatic film footage and commonly collected baseline characteristics (trait anxiety, current depression, trauma history) with an absence of analogue flashbacks. An absence of analogue flashbacks was associated with low emotional response to the traumatic film footage and, to a lesser extent, low trait anxiety and low current depression levels. Trauma history and recognition memory for the film were not significantly associated with an absence of analogue flashbacks. Understanding why some individuals report an absence of flashbacks may aid preventative treatments against flashback development. PMID:24920083

  16. Low emotional response to traumatic footage is associated with an absence of analogue flashbacks: an individual participant data meta-analysis of 16 trauma film paradigm experiments.

    PubMed

    Clark, Ian A; Mackay, Clare E; Holmes, Emily A

    2015-01-01

    Most people will experience or witness a traumatic event. A common occurrence after trauma is the experience of involuntary emotional memories of the traumatic event, herewith "flashbacks". Some individuals, however, report no flashbacks. Prospective work investigating psychological factors associated with an absence of flashbacks is lacking. We performed an individual participant data meta-analysis on 16 experiments (n = 458) using the trauma film paradigm to investigate the association of emotional response to traumatic film footage and commonly collected baseline characteristics (trait anxiety, current depression, trauma history) with an absence of analogue flashbacks. An absence of analogue flashbacks was associated with low emotional response to the traumatic film footage and, to a lesser extent, low trait anxiety and low current depression levels. Trauma history and recognition memory for the film were not significantly associated with an absence of analogue flashbacks. Understanding why some individuals report an absence of flashbacks may aid preventative treatments against flashback development.

  17. Rational pharmacotherapy training for fourth-year medical students.

    PubMed

    Gelal, Ayse; Gumustekin, Mukaddes; Arici, M Aylin; Gidener, Sedef

    2013-01-01

    In this study we aimed to evaluate the impact of a Rational Pharmacotherapy (RPT) course programme, reinforced by video footage, on the rational pharmacotherapy skills of the students. The RPT course programme has been conducted at Dokuz Eylul University School of Medicine since 2008/9. The course has been organised in accordance with the World Health Organisation (WHO) Good Prescribing Guide. The aim of the course was to improve the problem-solving skills (methodology for selection of the (p)ersonal drug, prescription writing, and informing the patient about the illness and drugs) and communication skills of students. The impact of the course was measured with a pre/post-test design using an objective structured clinical examination (OSCE). In the 2010/11 academic year, to further improve the OSCE scores of the students, we added doctor-patient communication video footage to the RPT course programme. During training, the students were asked to evaluate the doctor-patient communication and the prescription in two video clips using a checklist, followed by group discussions. The total post-test OSCE score was significantly higher for 2010/11 academic year students (n = 147) than for 2009/10 students (n = 131). The 2010/11 academic year students performed significantly better than the 2009/10 academic year students on four steps of the OSCE. These steps were "defining the patient's problem", "specifying the therapeutic objective", "specifying the non-pharmacological treatment" and "choosing a (drug) treatment, taking all relevant patient characteristics into account". The present study demonstrated that adding video footage and group discussions to the WHO Good Prescribing Method improved the fourth-year medical students' performance in rational pharmacotherapy skills.

  18. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal `average' bit-rates.

  19. VIDEO MODELING BY EXPERTS WITH VIDEO FEEDBACK TO ENHANCE GYMNASTICS SKILLS

    PubMed Central

    Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention. PMID:20514194

  20. Video modeling by experts with video feedback to enhance gymnastics skills.

    PubMed

    Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention.

  1. STS-112 Crew Training Clip

    NASA Astrophysics Data System (ADS)

    2002-09-01

    Footage shows the crew of STS-112 (Jeffrey Ashby, Commander; Pamela Melroy, Pilot; David Wolf, Piers Sellers, Sandra Magnus, and Fyodor Yurchikhin, Mission Specialists) during several parts of their training. The video is arranged into short segments. In 'Topside Activities at the NBL', Wolf and Sellers are fitted with EVA suits for pool training. 'Pre-Launch Bailout Training in CCT II' shows all six crew members exiting from the hatch on a model of a shuttle orbiter cockpit. 'EVA Training in the VR Lab' shows a crew member training with a virtual reality simulator, interspersed with footage of Magnus, and Wolf with Melroy, at monitors. There is a 'Crew Photo Session', and 'Pam Melroy and Sandy Magnus at the SES Dome' also features a virtual reality simulator. The final two segments of the video involve hands-on training. 'Post Landing Egress at the FFT' shows the crew suiting up into their flight suits, and being raised on a harness, to practice rappelling from the cockpit hatch. 'EVA Prep and Post at the ISS Airlock' shows the crew assembling an empty EVA suit onboard a model of a module. The crew tests oxygen masks, and Sellers is shown on an exercise bicycle with an oxygen mask, with his heart rate monitored (not shown).

  2. STS-112 Crew Training Clip

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Footage shows the crew of STS-112 (Jeffrey Ashby, Commander; Pamela Melroy, Pilot; David Wolf, Piers Sellers, Sandra Magnus, and Fyodor Yurchikhin, Mission Specialists) during several parts of their training. The video is arranged into short segments. In 'Topside Activities at the NBL', Wolf and Sellers are fitted with EVA suits for pool training. 'Pre-Launch Bailout Training in CCT II' shows all six crew members exiting from the hatch on a model of a shuttle orbiter cockpit. 'EVA Training in the VR Lab' shows a crew member training with a virtual reality simulator, interspersed with footage of Magnus, and Wolf with Melroy, at monitors. There is a 'Crew Photo Session', and 'Pam Melroy and Sandy Magnus at the SES Dome' also features a virtual reality simulator. The final two segments of the video involve hands-on training. 'Post Landing Egress at the FFT' shows the crew suiting up into their flight suits, and being raised on a harness, to practice rappelling from the cockpit hatch. 'EVA Prep and Post at the ISS Airlock' shows the crew assembling an empty EVA suit onboard a model of a module. The crew tests oxygen masks, and Sellers is shown on an exercise bicycle with an oxygen mask, with his heart rate monitored (not shown).

  3. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation-an exemplar-based clustering algorithm-achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
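
    The abstract's key step is exemplar-based frame sampling with affinity propagation. The sketch below illustrates that idea with scikit-learn's AffinityPropagation on downscaled grayscale frames; the feature representation, similarity measure, and toy data are assumptions, not the paper's exact setup.

    ```python
    # Hedged sketch: pick exemplar frames with affinity propagation clustering.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def sample_key_frames(frames):
        """frames: (n_frames, h, w) array; returns indices of exemplar frames."""
        X = frames.reshape(frames.shape[0], -1).astype(float)
        ap = AffinityPropagation(random_state=0).fit(X)   # default negative squared-Euclidean similarity
        return np.unique(ap.cluster_centers_indices_)

    # Toy usage: 120 frames drawn from 5 underlying "shots" plus noise.
    rng = np.random.default_rng(0)
    shots = rng.random((5, 32, 32))
    frames = shots[rng.integers(0, 5, size=120)] + 0.05 * rng.normal(size=(120, 32, 32))
    print("exemplar frame indices:", sample_key_frames(frames))
    ```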

  4. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    PubMed

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature, mel-frequency cepstral coefficients (MFCC), is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
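
    The pair-wise ranking constraints described above can be made concrete with a simple margin objective: segments kept in the edited video should score higher than segments trimmed from the raw video. The sketch below shows only that plain hinge-style objective; the paper's latent variables and EM-like training are omitted, and the features are random stand-ins.

    ```python
    # Hedged sketch: a pair-wise ranking (hinge) loss over segment features.
    import numpy as np

    def pairwise_hinge_loss(w, kept_feats, trimmed_feats, margin=1.0):
        """w: weight vector; kept/trimmed_feats: (n_pairs, d) segment features."""
        gap = kept_feats @ w - trimmed_feats @ w      # score difference per pair
        return np.maximum(0.0, margin - gap).mean()

    rng = np.random.default_rng(0)
    w = rng.normal(size=64)
    kept = rng.normal(size=(100, 64))      # segments that survived editing
    trimmed = rng.normal(size=(100, 64))   # segments cut from the raw video
    print(pairwise_hinge_loss(w, kept, trimmed))
    ```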

  5. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    NASA Astrophysics Data System (ADS)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was added by 23 clips of a cochlear implantation, which was specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right image was performed at the University Hospital Aachen. The footage was edited stereoscopically at the Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips by a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth clues within the six video clips plus cochlear implantation clips. Another 25 4th year students who were shown the material monoscopically on a conventional laptop served as control. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  6. Automated Video-Based Traffic Count Analysis.

    DOT National Transportation Integrated Search

    2016-01-01

    The goal of this effort has been to develop techniques that could be applied to the : detection and tracking of vehicles in overhead footage of intersections. To that end we : have developed and published techniques for vehicle tracking based on dete...

  7. TDRS-1 Going Strong at 20

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video presents an overview of the first Tracking and Data Relay Satellite (TDRS-1) in the form of text, computer animations, footage, and an interview with its program manager. Launched by the Space Shuttle Challenger in 1983, TDRS-1 was the first of a network of satellites used for relaying data to and from scientific spacecraft. Most of this short video is silent, and consists of footage and animation of the deployment of TDRS-1, written and animated explanations of what TDRS satellites do, and samples of the astronomical and Earth science data they transmit. The program manager explains in the final segment of the video the improvement TDRS satellites brought to communication with manned space missions, including alleviation of blackout during reentry, and also the role TDRS-1 played in providing telemedicine for a breast cancer patient in Antarctica.

  8. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website.2 Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.
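
    For readers who want to mirror the motion maps described above, the sketch below plots distance-time and speed-time graphs from synchronized dashboard readings; the numbers are made up purely for illustration.

    ```python
    # Hedged sketch: plotting dashboard data as distance-time and speed-time graphs.
    import matplotlib.pyplot as plt

    time_s      = [0, 10, 20, 30, 40, 50, 60]                  # stopwatch (s)
    speed_mph   = [0, 15, 30, 30, 20, 10, 0]                   # speedometer (mph)
    distance_mi = [0.00, 0.02, 0.08, 0.16, 0.23, 0.27, 0.28]   # odometer (mi)

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    ax1.plot(time_s, distance_mi, marker="o")
    ax1.set_ylabel("distance (mi)")
    ax2.plot(time_s, speed_mph, marker="o")
    ax2.set_ylabel("speed (mph)")
    ax2.set_xlabel("time (s)")
    plt.show()
    ```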

  9. That's a "Wrap."

    ERIC Educational Resources Information Center

    Gillespie, Patricia

    1995-01-01

    A secondary teacher in Hawaii's Kamehameha schools describes how she teaches her students about video and television production. Because it is a school for native Hawaiians, the program emphasizes cultural documentation. Through the program, students learn to research, interview, organize video footage, write, rewrite, and use technology. (SM)

  10. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increased numbers of video cameras, especially the increase of millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames); (ii) a response phase, generating the video synopsis as a response to the user's query.
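
    To make the "response phase" concrete, the toy sketch below packs activities that originally occurred at different times into a short synopsis timeline, allowing a few to play simultaneously. It deliberately ignores the paper's object-level representation and optimisation; the class, function, and sample activities are invented for illustration.

    ```python
    # Hedged sketch: greedy packing of activities into a short synopsis timeline.
    from dataclasses import dataclass

    @dataclass
    class Activity:
        label: str
        start: float      # original start time (s)
        duration: float   # activity length (s)

    def build_synopsis(activities, synopsis_length, max_overlap=3):
        """Give each activity a new start time in [0, synopsis_length),
        letting at most `max_overlap` activities play at once."""
        slots = []  # (new_start, activity)
        for act in sorted(activities, key=lambda a: a.duration, reverse=True):
            t = 0.0
            while t + act.duration <= synopsis_length:
                clashes = sum(1 for s, a in slots
                              if s < t + act.duration and t < s + a.duration)
                if clashes < max_overlap:
                    slots.append((t, act))
                    break
                t += 1.0
        return slots

    day = [Activity("person A", 3600, 12), Activity("car", 18000, 8),
           Activity("person B", 70000, 20)]
    for new_start, act in build_synopsis(day, synopsis_length=60):
        print(f"{act.label}: originally at {act.start:.0f} s, synopsis at {new_start:.0f} s")
    ```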

  11. Feedforward self-modeling enhances skill acquisition in children learning trampoline skills.

    PubMed

    Ste-Marie, Diane M; Vertes, Kelly; Rymal, Amanda M; Martini, Rose

    2011-01-01

    The purpose of this research was to examine whether children would benefit from a feedforward self-modeling (FSM) video and to explore possible explanatory mechanisms for the potential benefits, using a self-regulation framework. To this end, children were involved in learning two five-skill trampoline routines. For one of the routines, a FSM video was provided during acquisition, whereas only verbal instructions were provided for the alternate routine. The FSM involved editing video footage such that it showed the learner performing the trampoline routine at a higher skill level than their current capability. Analyses of the data showed that while physical performance benefits were observed for the routine that was learned with the FSM video, no differences were obtained in relation to the self-regulatory measures. Thus, the FSM video enhanced motor skill acquisition, but this could not be explained by changes to the varied self-regulatory processes examined.

  12. Scientists feature their work in Arctic-focused short videos by FrontierScientists

    NASA Astrophysics Data System (ADS)

    Nielsen, L.; O'Connell, E.

    2013-12-01

    Whether they're guiding an unmanned aerial vehicle into a volcanic plume to sample aerosols, or documenting core drilling at a frozen lake in Siberia formed 3.6 million years ago by a massive meteorite impact, Arctic scientists are using video to enhance and expand their science and science outreach. FrontierScientists (FS), a forum for showcasing scientific work, produces and promotes radically different video blogs featuring Arctic scientists. Three- to seven- minute multimedia vlogs help deconstruct researcher's efforts and disseminate stories, communicating scientific discoveries to our increasingly connected world. The videos cover a wide range of current field work being performed in the Arctic. All videos are freely available to view or download from the FrontierScientists.com website, accessible via any internet browser or via the FrontierScientists app. FS' filming process fosters a close collaboration between the scientist and the media maker. Film creation helps scientists reach out to the public, communicate the relevance of their scientific findings, and craft a discussion. Videos keep audience tuned in; combining field footage, pictures, audio, and graphics with a verbal explanation helps illustrate ideas, allowing one video to reach people with different learning strategies. The scientists' stories are highlighted through social media platforms online. Vlogs grant scientists a voice, letting them illustrate their own work while ensuring accuracy. Each scientific topic on FS has its own project page where easy-to-navigate videos are featured prominently. Video sets focus on different aspects of a researcher's work or follow one of their projects into the field. We help the scientist slip the answers to their five most-asked questions into the casual script in layman's terms in order to free the viewers' minds to focus on new concepts. Videos are accompanied by written blogs intended to systematically demystify related facts so the scientists can focus

  13. Apollo 13: Houston, We've Got a Problem

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This video contains historical footage of the flight of Apollo-13, the fifth Lunar Mission and the third spacecraft that was to land on the Moon. Apollo-13's launch date was April 11, 1970. On the 13th of April, after docking with the Lunar Module, the astronauts, Jim Lovell, Fred Haise, and Jack Swigert, discovered that an oxygen tank had ruptured, and they ended up using the Lunar Module as a lifeboat for the return to Earth instead of the Command Module. There is footage from inside the module and of Mission Control, personal commentary by the astronauts concerning the problems as they developed, national news footage and commentary, and a post-flight Presidential Address by President Richard Nixon. Film footage of the approach to the Moon and departing from Earth, and air-to-ground communication with Mission Control is included.

  14. Apollo 13: Houston, we've got a problem

    NASA Astrophysics Data System (ADS)

    1991-04-01

    This video contains historical footage of the flight of Apollo-13, the fifth Lunar Mission and the third spacecraft that was to land on the Moon. Apollo-13's launch date was April 11, 1970. On the 13th of April, after docking with the Lunar Module, the astronauts, Jim Lovell, Fred Haise, and Jack Swigert, discovered that an oxygen tank had ruptured, and they ended up using the Lunar Module as a lifeboat for the return to Earth instead of the Command Module. There is footage from inside the module and of Mission Control, personal commentary by the astronauts concerning the problems as they developed, national news footage and commentary, and a post-flight Presidential Address by President Richard Nixon. Film footage of the approach to the Moon and departing from Earth, and air-to-ground communication with Mission Control is included.

  15. Video game players show higher performance but no difference in speed of attention shifts.

    PubMed

    Mack, David J; Wiesmann, Helene; Ilg, Uwe J

    2016-09-01

    Video games have become both a widespread leisure activity and a substantial field of research. In a variety of tasks, video game players (VGPs) perform better than non-video game players (NVGPs). This difference is most likely explained by an alteration of the basic mechanisms underlying visuospatial attention. More specifically, the present study hypothesizes that VGPs are able to shift attention faster than NVGPs. Such alterations in attention cannot be disentangled from changes in stimulus-response mappings in reaction time based measurements. Therefore, we used a spatial cueing task with varying cue lead times (CLTs) to investigate the speed of covert attention shifts of 98 male participants divided into 36 NVGPs and 62 VGPs based on their weekly gaming time. VGPs exhibited higher peak and mean performance than NVGPs. However, we did not find any differences in the speed of covert attention shifts as measured by the CLT needed to achieve peak performance. Thus, our results clearly rule out faster stimulus-response mappings as an explanation for the higher performance of VGPs in line with previous studies. More importantly, our data do not support the notion of faster attention shifts in VGPs as another possible explanation. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Video Modeling by Experts with Video Feedback to Enhance Gymnastics Skills

    ERIC Educational Resources Information Center

    Boyer, Eva; Miltenberger, Raymond G.; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill…

  17. Pornography classification: The hidden clues in video space-time.

    PubMed

    Moreira, Daniel; Avila, Sandra; Perez, Mauricio; Moraes, Daniel; Testoni, Vanessa; Valle, Eduardo; Goldenstein, Siome; Rocha, Anderson

    2016-11-01

    As web technologies and social networks become part of the general public's life, the problem of automatically detecting pornography is into every parent's mind - nobody feels completely safe when their children go online. In this paper, we focus on video-pornography classification, a hard problem in which traditional methods often employ still-image techniques - labeling frames individually prior to a global decision. Frame-based approaches, however, ignore significant cogent information brought by motion. Here, we introduce a space-temporal interest point detector and descriptor called Temporal Robust Features (TRoF). TRoF was custom-tailored for efficient (low processing time and memory footprint) and effective (high classification accuracy and low false negative rate) motion description, particularly suited to the task at hand. We aggregate local information extracted by TRoF into a mid-level representation using Fisher Vectors, the state-of-the-art model of Bags of Visual Words (BoVW). We evaluate our original strategy, contrasting it both to commercial pornography detection solutions, and to BoVW solutions based upon other space-temporal features from the scientific literature. The performance is assessed using the Pornography-2k dataset, a new challenging pornographic benchmark, comprising 2000 web videos and 140h of video footage. The dataset is also a contribution of this work and is very assorted, including both professional and amateur content, and it depicts several genres of pornography, from cartoon to live action, with diverse behavior and ethnicity. The best approach, based on a dense application of TRoF, yields a classification error reduction of almost 79% when compared to the best commercial classifier. A sparse description relying on TRoF detector is also noteworthy, for yielding a classification error reduction of over 69%, with 19× less memory footprint than the dense solution, and yet can also be implemented to meet real-time requirements
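
    The mid-level representation named above (Fisher Vectors over local space-time descriptors) can be sketched independently of TRoF itself. The example below computes only the first-order (mean-gradient) part of a Fisher Vector with a diagonal-covariance GMM; the descriptors are random stand-ins and the dimensions are arbitrary assumptions, not the paper's configuration.

    ```python
    # Hedged sketch: first-order Fisher Vector aggregation of local descriptors.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector_means(descriptors, gmm):
        """Mean-gradient Fisher Vector for one video's local descriptors."""
        q = gmm.predict_proba(descriptors)                # (n, K) soft assignments
        n = descriptors.shape[0]
        parts = []
        for k in range(gmm.n_components):
            diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
            parts.append((q[:, k, None] * diff).sum(axis=0) /
                         (n * np.sqrt(gmm.weights_[k])))
        return np.concatenate(parts)                      # length K * d

    rng = np.random.default_rng(0)
    train_desc = rng.normal(size=(5000, 16))              # pooled training descriptors
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(train_desc)
    video_desc = rng.normal(size=(300, 16))               # one video's descriptors
    print(fisher_vector_means(video_desc, gmm).shape)     # -> (128,)
    ```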

  18. Observation interventions as a means to manipulate collective efficacy in groups.

    PubMed

    Bruton, Adam M; Mellalieu, Stephen D; Shearer, David A

    2014-02-01

    The purpose of this multistudy investigation was to examine observation as an intervention for the manipulation of individual collective efficacy beliefs. Study 1 compared the effects of positive, neutral, and negative video footage of practice trials from an obstacle course task on collective efficacy beliefs in assigned groups. The content of the observation intervention (i.e., positive, neutral, and negative video footage) significantly influenced the direction of change in collective efficacy (p < .05). Study 2 assessed the influence of content familiarity (own team/sport vs. unfamiliar team/sport) on individual collective efficacy perceptions when observing positive footage of competitive basketball performance. Collective efficacy significantly increased for both the familiar and unfamiliar conditions postintervention, with the largest increase for the familiar condition (p < .05). The studies support the use of observation as an intervention to enhance individual perceptions of collective efficacy in group-based activities. The findings suggest that observations of any group displaying positive group characteristics are likely to increase collective efficacy beliefs; however, observation of one's own team leads to the greatest increases.

  19. Event Completion: Event Based Inferences Distort Memory in a Matter of Seconds

    ERIC Educational Resources Information Center

    Strickland, Brent; Keil, Frank

    2011-01-01

    We present novel evidence that implicit causal inferences distort memory for events only seconds after viewing. Adults watched videos of someone launching (or throwing) an object. However, the videos omitted the moment of contact (or release). Subjects falsely reported seeing the moment of contact when it was implied by subsequent footage but did…

  20. Early Vocal Development in Autism Spectrum Disorder, Rett Syndrome, and Fragile X Syndrome: Insights from Studies using Retrospective Video Analysis.

    PubMed

    Roche, Laura; Zhang, Dajie; Bartl-Pokorny, Katrin D; Pokorny, Florian B; Schuller, Björn W; Esposito, Gianluca; Bölte, Sven; Roeyers, Herbert; Poustka, Luise; Gugatschka, Markus; Waddington, Hannah; Vollmann, Ralf; Einspieler, Christa; Marschik, Peter B

    2018-03-01

    This article provides an overview of studies assessing the early vocalisations of children with autism spectrum disorder (ASD), Rett syndrome (RTT), and fragile X syndrome (FXS) using retrospective video analysis (RVA) during the first two years of life. Electronic databases were systematically searched and a total of 23 studies were selected. These studies were then categorised according to whether children were later diagnosed with ASD (13 studies), RTT (8 studies), or FXS (2 studies), and then described in terms of (a) participant characteristics, (b) control group characteristics, (c) video footage, (d) behaviours analysed, and (e) main findings. This overview supports the use of RVA in analysing the early development of vocalisations in children later diagnosed with ASD, RTT or FXS, and provides an in-depth analysis of vocalisation presentation, complex vocalisation production, and the rate and/or frequency of vocalisation production across the three disorders. Implications are discussed in terms of extending crude vocal analyses to more precise methods that might provide more powerful means by which to discriminate between disorders during early development. A greater understanding of the early manifestation of these disorders may then lead to improvements in earlier detection.

  1. Feedforward Self-Modeling Enhances Skill Acquisition in Children Learning Trampoline Skills

    PubMed Central

    Ste-Marie, Diane M.; Vertes, Kelly; Rymal, Amanda M.; Martini, Rose

    2011-01-01

    The purpose of this research was to examine whether children would benefit from a feedforward self-modeling (FSM) video and to explore possible explanatory mechanisms for the potential benefits, using a self-regulation framework. To this end, children were involved in learning two five-skill trampoline routines. For one of the routines, a FSM video was provided during acquisition, whereas only verbal instructions were provided for the alternate routine. The FSM involved editing video footage such that it showed the learner performing the trampoline routine at a higher skill level than their current capability. Analyses of the data showed that while physical performance benefits were observed for the routine that was learned with the FSM video, no differences were obtained in relation to the self-regulatory measures. Thus, the FSM video enhanced motor skill acquisition, but this could not be explained by changes to the varied self-regulatory processes examined. PMID:21779270

  2. JPL-20171011-OCOf-0002-NASA Pinpoints Cause of Earths Recent CO2 Spike

    NASA Image and Video Library

    2017-10-12

    Video File: New research from NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite shows that the impacts of heat and drought during the 2015-16 El Niño on Earth's tropical regions were responsible for the largest increase in atmospheric CO2 in at least 2,000 years. Animations showing change from 2014-2017, summertime changes in CO2, OCO-2 spacecraft. Footage of Amazon rainforest. Interview with Annemarie Eldering, OCO-2 Deputy Project Scientist, JPL.

  3. Eclipse Photo/Video Coverage

    NASA Image and Video Library

    2017-08-21

    On Monday, Aug. 21, NASA provided coast-to-coast coverage of the solar eclipse across America – featuring views of the phenomenon from unique vantage points, including from the ground, from aircraft, and from spacecraft including the ISS, during a live broadcast seen on NASA Television and the agency’s website. This is footage from the Kennedy Space Center Visitor Complex, KARS Park at Kennedy, and the Vehicle Assembly Building.

  4. Let's Play: Exploring Literacy Practices in an Emerging Videogame Paratext

    ERIC Educational Resources Information Center

    Burwell, Catherine; Miller, Thomas

    2016-01-01

    This article explores the literacy practices associated with Let's Play videos (or LPs) on YouTube. A hybrid of digital gaming and video, LPs feature gameplay footage accompanied by simultaneous commentary recorded by the player. Players may set out to promote, review, critique or satirize a game. In recent years, LPs have become hugely popular…

  5. Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study.

    PubMed

    Gao, Xuemei; Pan, Wei; Li, Chao; Weng, Lei; Yao, Mengyun; Chen, Antao

    2017-01-01

    As a typical form of empathy, empathy for pain refers to the perception and appraisal of others' pain, as well as the corresponding affective responses. Numerous studies have investigated factors affecting empathy for pain, suggesting that exposure to violent video games (VVGs) could change players' empathic responses to painful situations. However, it remains unclear whether VVG exposure influences empathy for pain. In the present study, two groups of participants (18 in the VVG group, VG; 17 in the non-VVG group, NG) were screened from nearly 200 video game experience questionnaires on the basis of their VVG exposure. Functional magnetic resonance imaging data were then recorded while they viewed painful and non-painful stimuli. The results showed that the perception of others' pain did not differ significantly between groups in any brain region, from which we infer that the desensitization effect of VVGs has been overrated.

  6. STS-111 Flight Day 2 Highlights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On Flight Day 2 of STS-111, the crew of Endeavour (Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist) and the Expedition 5 crew (Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer), having successfully entered orbit around the Earth, begin to maneuver towards the International Space Station (ISS), where the Expedition 5 crew will replace the Expedition 4 crew. Live video is shown of the Earth from several vantage points aboard the Shuttle. The center-line camera, which will allow Shuttle pilots to align the docking apparatus with that on the ISS, provides footage of the Earth. Chang-Diaz participates in an interview, in Spanish, conducted from the ground via radio communications, with Cockrell also appearing. Footage of the Earth includes: Daytime video of the Eastern United States with some cloud cover as Endeavour passes over the Florida panhandle, Georgia, and the Carolinas; Daytime video of Lake Michigan unobscured by cloud cover; Nighttime low-light camera video of Madrid, Spain.

  7. Discriminating talent-identified junior Australian football players using a video decision-making task.

    PubMed

    Woods, Carl T; Raynor, Annette J; Bruce, Lyndell; McDonald, Zane

    2016-01-01

    This study examined whether a video decision-making task could discriminate talent-identified junior Australian football players from their non-talent-identified counterparts. Participants were recruited from the 2013 under 18 (U18) West Australian Football League competition and classified into two groups: talent-identified (State U18 Academy representatives; n = 25; 17.8 ± 0.5 years) and non-talent-identified (non-State U18 Academy selection; n = 25; 17.3 ± 0.6 years). Participants completed a video decision-making task consisting of 26 clips sourced from Australian Football League game-day footage, recording responses on a sheet provided. A score of "1" was given for correct and "0" for incorrect responses, with each participant's total score used as the criterion value. One-way analysis of variance tested the main effect of "status" on the task criterion, whilst a bootstrapped receiver operating characteristic (ROC) curve assessed the discriminant ability of the task. An area under the curve (AUC) of 1 (100%) represented perfect discrimination. Between-group differences were evident (P < 0.05) and the ROC curve was maximised with a score of 15.5/26 (60%) (AUC = 89.0%), correctly classifying 92% and 76% of the talent-identified and non-talent-identified participants, respectively. Future research should investigate the mechanisms leading to the superior decision-making observed in the talent-identified group.
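
    The analysis described above (a one-way ANOVA plus a bootstrapped ROC curve) can be approximated with standard tools. The sketch below, which uses invented scores rather than the study's data, shows how an AUC, a discriminating cutoff, and a bootstrap confidence interval might be obtained; it assumes NumPy and scikit-learn and is not the authors' code.

    ```python
    # Hypothetical sketch of the ROC analysis described above (not the authors' code).
    # Scores and labels are invented for illustration.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    # 1 = talent-identified, 0 = non-talent-identified (25 players per group)
    labels = np.r_[np.ones(25), np.zeros(25)]
    # Invented decision-making scores out of 26
    scores = np.r_[rng.normal(18, 3, 25), rng.normal(13, 3, 25)].clip(0, 26)

    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)                      # Youden's J statistic
    print(f"AUC = {roc_auc_score(labels, scores):.3f}, cutoff ~ {thresholds[best]:.1f}")

    # Simple bootstrap for the AUC, a stand-in for the bootstrapped ROC in the study
    boot = [roc_auc_score(labels[idx], scores[idx])
            for idx in (rng.integers(0, 50, 50) for _ in range(2000))
            if len(set(labels[idx])) == 2]
    print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
    ```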

  8. STS-26 Post-Flight Crew Press Conference

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This video tape contains footage selected and narrated by the STS-26 crew including launch, TDRS-C/IUS (Tracking and Data Relay Satellite C / Inertial Upper Stage) deployment, onboard activities, and landing.

  9. Witnessing trauma in the newsroom: posttraumatic symptoms in television journalists exposed to violent news clips.

    PubMed

    Weidmann, Anke; Papsdorf, Jenny

    2010-04-01

    Employees working in television newsrooms are exposed to video footage of violent events on a daily basis. It is yet unknown whether they subsequently develop symptoms of posttraumatic stress disorder as has been shown for other populations exposed to trauma through television. We conducted an internet-based survey with 81 employees. Nearly 80% of the sample reported being familiar with recurring intrusive memories. However, the sample's overall posttraumatic stress disorder symptoms were low, although participants with a prior trauma, more general work stress, and a greater exposure to footage had a tendency to show more severe symptoms. Regarding general mental health, there were no differences compared with a journalistic control group. Results suggest that the population as such is not at a particular risk of developing mental problems.

  10. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    PubMed

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    To engage medical students and residents from public health centers in using the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study describes the results of an evaluation at level 1 of Kirkpatrick's Model for Evaluation of the streaming system's use during gynecological surgeries, based on the perceptions of medical students and gynecology residents. The setup consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers connected to the local wireless network created by the streaming system using an access password and watched the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, including a comparison with watching a procedure in loco. The study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, yielding 294 answered items, of which 94.2% agreed with the item statements, 4.1% were neutral, and only 1.7% reflected negative impressions. Cronbach's α was .82, which represents good reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20. The study thus presents a local system for streaming live surgeries to smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution and is therefore potentially applicable in surgical teaching.

  11. Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study

    PubMed Central

    Gao, Xuemei; Pan, Wei; Li, Chao; Weng, Lei; Yao, Mengyun; Chen, Antao

    2017-01-01

    As a typical form of empathy, empathy for pain refers to the perception and appraisal of others’ pain, as well as the corresponding affective responses. Numerous studies have investigated factors affecting empathy for pain, suggesting that exposure to violent video games (VVGs) could change players’ empathic responses to painful situations. However, it remains unclear whether VVG exposure influences empathy for pain. In the present study, two groups of participants (18 in the VVG group, VG; 17 in the non-VVG group, NG) were screened from nearly 200 video game experience questionnaires on the basis of their VVG exposure. Functional magnetic resonance imaging data were then recorded while they viewed painful and non-painful stimuli. The results showed that the perception of others’ pain did not differ significantly between groups in any brain region, from which we infer that the desensitization effect of VVGs has been overrated. PMID:28512439

  12. STS-107 Flight Day 15 Highlights

    NASA Astrophysics Data System (ADS)

    2003-01-01

    This video shows the activities of the STS-107 crew on flight day 15 of the Columbia orbiter's final mission. The crew includes Commander Rick Husband, Pilot William McCool, Mission Specialists Michael Anderson, David Brown, Laurel Clark, and Kalpana Chawla, and Payload Specialist Ilan Ramon. The primary activities of flight day 15 are crew interviews, and operating the Water Mist Fire Suppression (MIST) experiment. Early in the video, astronauts McCool and Ramon respond together to a question. Much of the video is taken up by an interview of astronauts Brown, Anderson, and McCool. Two parts of the video show the MIST experiment in operation, operated the first time by astronaut Brown. Another part of the video is narrated by Mission Specialist Clark, who identifies views of Mount Vesuvius, and an atoll in the south Pacific. In this part, Payload Specialist Ramon is seen on an exercise machine, Commander Husband shows body fluid samples from the crew taken during the mission, and Clark demonstrates how the crew eats meals. The video ends with footage from earlier in the mission which shows a deployed radiator in the shuttle's payload bay that reflects an image of the Earth.

  13. Developing a Promotional Video

    ERIC Educational Resources Information Center

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  14. Reconstruction of the Genesis Entry

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Qualls, Garry D.; Schoenenberger, Mark

    2005-01-01

    This paper provides an overview of the findings from a reconstruction analysis of the Genesis capsule entry. First, a comparison of the atmospheric properties (density and winds) encountered during the entry to the pre-entry profile is presented. The analysis that was performed on the video footage (obtained from the tracking stations at UTTR) during the descent is then described from which the Mach number at the onset of the capsule tumble was estimated following the failure of the drogue parachute deployment. Next, an assessment of the Genesis capsule aerodynamics that was extracted from the video footage is discussed, followed by a description of the capsule hypersonic attitude that must have occurred during the entry based on examination of the recovered capsule heatshield. Lastly, the entry trajectory reconstruction that was performed is presented.

  15. Videos of conspecifics elicit interactive looking patterns and facial expressions in monkeys

    PubMed Central

    Mosher, Clayton P.; Zimmerman, Prisca E.; Gothard, Katalin M.

    2014-01-01

    A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous, but reproducible and tractable behaviors. In this context of natural behaviors, individual variation can further inform about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce in viewer monkeys spontaneous facial expressions and looking patterns in the laboratory setting. Three adult male monkeys, previously behaviorally and genetically (5-HTTLPR) characterized (Gibboni et al., 2009), were monitored while they watched 10 s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between ‘averted’ and ‘directed’ at the viewer. The viewers were not reinforced for watching the movies, thus their looking patterns indicated their interest and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact induced reciprocation of facial expression, gaze aversion, and gaze following, that were previously not observed in response to static images. This approach opens a unique opportunity to understanding the mechanisms that trigger spontaneous social behaviors in humans and non-human primates. PMID:21688888

  16. Videos of conspecifics elicit interactive looking patterns and facial expressions in monkeys.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2011-08-01

    A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous, but reproducible and tractable behaviors. In this context of natural behaviors, individual variation can further inform about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce in viewer monkeys spontaneous facial expressions and looking patterns in the laboratory setting. Three adult male monkeys (Macaca mulatta), previously behaviorally and genetically (5-HTTLPR) characterized, were monitored while they watched 10 s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between "averted" and "directed" at the viewer. The viewers were not reinforced for watching the movies, thus their looking patterns indicated their interest and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact induced reciprocation of facial expression, gaze aversion, and gaze following, that were previously not observed in response to static images. This approach opens a unique opportunity to understanding the mechanisms that trigger spontaneous social behaviors in humans and nonhuman primates. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  17. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    NASA Astrophysics Data System (ADS)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
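
    The pipeline described above combines astronomical source-detection software with machine-learning classification. The sketch below illustrates only the source-detection step on a synthetic thermal frame, assuming the photutils and astropy packages; the actual software, parameters, and data used by the authors may differ.

    ```python
    # Minimal sketch of the source-detection step only (assumes photutils + astropy;
    # the paper's actual pipeline and parameters may differ).
    import numpy as np
    from astropy.stats import sigma_clipped_stats
    from photutils.detection import DAOStarFinder

    # Synthetic thermal frame: cool background plus two warm, roughly Gaussian "animals"
    y, x = np.mgrid[0:128, 0:128]
    frame = np.random.default_rng(1).normal(20.0, 0.5, (128, 128))
    for cx, cy in [(40, 60), (90, 30)]:
        frame += 8.0 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 3.0 ** 2))

    # Background statistics and point-source detection, as in stellar photometry
    _, median, std = sigma_clipped_stats(frame, sigma=3.0)
    finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)
    sources = finder(frame - median)
    print(sources["xcentroid", "ycentroid", "peak"])
    ```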

  18. A novel key-frame extraction approach for both video summary and video index.

    PubMed

    Lei, Shaoshuai; Xie, Gang; Yan, Gaowei

    2014-01-01

    Existing key-frame extraction methods are largely oriented toward video summarization, while the indexing role of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summarization and video indexing. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; appropriate key-frames are then extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing while producing video summaries consistent with human perception.
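
    As one plausible reading of the subshot/SVD step described above, the sketch below picks a single representative key-frame per subshot as the frame most strongly aligned with the dominant singular direction of the subshot's feature matrix. The feature vectors and subshot data are invented placeholders, not the paper's algorithm.

    ```python
    # Hedged sketch: pick one representative key-frame per subshot via SVD.
    # "Frames" here are pre-computed feature vectors (e.g., colour histograms).
    import numpy as np

    def keyframe_by_svd(features: np.ndarray) -> int:
        """features: (n_frames, n_dims) matrix for one subshot.
        Returns the index of the frame best aligned with the dominant
        singular direction of the (centered) subshot."""
        centered = features - features.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ vt[0]        # projection on first right-singular vector
        return int(np.argmax(np.abs(scores)))

    rng = np.random.default_rng(2)
    subshot = rng.random((30, 64))       # 30 frames, 64-bin histograms (illustrative)
    print("key-frame index within subshot:", keyframe_by_svd(subshot))
    ```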

  19. Laser Geodynamics Satellite- B-roll footage (No Sound)

    NASA Image and Video Library

    2016-05-04

    This 1975 NASA video highlights the development of LAser GEOdynamics Satellite (LAGEOS I). LAGEOS I is a passive satellite constructed from brass and aluminum and contains 426 individual precision reflectors made from fused silica glass. The mirrored surface of the satellite was designed to reflect laser beams from ground stations for accurate ranging measurements. LAGEOS I was launched on May 4, 1976 from Vandenberg Air Force Base, California. The two-foot diameter, 900-pound satellite orbited the Earth from pole to pole, measuring the movements of the Earth's surface relative to earthquakes, continental drift, and other geophysical phenomena. Scientists at NASA's Marshall Space Flight Center in Huntsville, Alabama came up with the idea for the satellite and built it at the Marshall Center.

  20. Evaluation of DOTD's Existing Queue Estimation Procedures : Research Project Capsule

    DOT National Transportation Integrated Search

    2017-10-01

    The primary objective of this study is to evaluate the effectiveness of DOTD's queue estimation procedures by comparing results with those obtained directly from site observations through video camera footage or other means. Actual queue start time...

  1. President Kennedy's Speech at Rice University

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This video tape presents unedited film footage of President John F. Kennedy's speech at Rice University, Houston, Texas, September 12, 1962. The speech expresses the commitment of the United States to landing an astronaut on the Moon.

  2. XTE Solid Motor Installation at Pad 17-A, Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This NASA Kennedy Space Center video presents live footage of the installation of the XTE (X-Ray Timing Explorer) Solid Rocket Motor at Launch Pad 17-A. The installation takes place at Cape Canaveral Air Station, Florida.

  3. Intelligent keyframe extraction for video printing

    NASA Astrophysics Data System (ADS)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
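
    The sketch below illustrates just one of the cues mentioned above: proposing candidate keyframes whenever the accumulated colour-histogram change exceeds a threshold. Camera-motion estimation, object tracking, face and audio-event detection, and the final clustering step are omitted, and the threshold and synthetic frames are assumptions.

    ```python
    # Illustrative sketch of one ingredient described above: candidate key-frames
    # from accumulated histogram change. Frame data are synthetic.
    import numpy as np

    def candidate_keyframes(frames, bins=32, trigger=0.6):
        """frames: iterable of HxW grayscale arrays in [0, 255].
        Emit a frame index whenever accumulated histogram change exceeds `trigger`."""
        candidates, acc, prev = [0], 0.0, None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
            hist = hist / hist.sum()
            if prev is not None:
                acc += 0.5 * np.abs(hist - prev).sum()   # L1 histogram distance
                if acc >= trigger:
                    candidates.append(i)
                    acc = 0.0
            prev = hist
        return candidates

    # Two flat synthetic "scenes": a cut after frame 25 should yield a candidate
    clip = [np.full((48, 64), 60.0 if i <= 25 else 180.0) for i in range(50)]
    print(candidate_keyframes(clip))     # -> [0, 26]
    ```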

  4. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

    Video denoising can be described as the problem of mapping a fixed-length sequence of noisy frames to clean ones. We propose a deep architecture based on Recurrent Neural Networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between noisy and clean video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers, where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture exploits the temporal domain to extract motion information, which benefits video denoising; (ii) the deep architecture has enough capacity to express the mapping between corrupted input videos and clean output videos; and (iii) the model generalizes to learn different mappings for videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
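
    A minimal PyTorch sketch of the stated idea, stacked RNN layers mapping sequences of noisy patches to clean ones, is given below. The patch size, depth, and training details are assumptions for illustration, not the architecture reported in the paper.

    ```python
    # Hedged sketch of a stacked-RNN patch denoiser (not the paper's exact model).
    import torch
    import torch.nn as nn

    class DRNNDenoiser(nn.Module):
        def __init__(self, patch_dim=64, hidden=128, layers=3):
            super().__init__()
            self.rnn = nn.RNN(patch_dim, hidden, num_layers=layers, batch_first=True)
            self.out = nn.Linear(hidden, patch_dim)   # project back to patch space

        def forward(self, noisy_seq):                 # (batch, time, patch_dim)
            h, _ = self.rnn(noisy_seq)
            return self.out(h)                        # denoised patch per time step

    model = DRNNDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy data: clean 8x8 patches tracked over 5 frames, corrupted by Gaussian noise
    clean = torch.rand(16, 5, 64)
    noisy = clean + 0.1 * torch.randn_like(clean)
    for _ in range(5):                                # a few illustrative steps
        loss = nn.functional.mse_loss(model(noisy), clean)
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
    ```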

  5. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical life or death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
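
    The core idea, selecting reference (I-)frames when a vehicle is detected at a trigger position and then searching only those frames, can be sketched without a real codec as below. The detector flags and matching function are placeholders, not the authors' implementation.

    ```python
    # Conceptual sketch (not a real codec): choose reference (I-)frames where a vehicle
    # is detected at the trigger position, then restrict a later search to those frames.
    def select_iframes(vehicle_at_trigger, min_gap=15):
        """vehicle_at_trigger: list of booleans, one per frame."""
        iframes, last = [], -min_gap
        for i, hit in enumerate(vehicle_at_trigger):
            if hit and i - last >= min_gap:      # avoid back-to-back I-frames
                iframes.append(i)
                last = i
        return iframes

    def search_vehicle(iframes, matches_query):
        """Only decode/inspect reference frames instead of the whole sequence."""
        return [i for i in iframes if matches_query(i)]

    detections = [i % 40 == 0 for i in range(400)]   # a vehicle every ~40 frames
    iframes = select_iframes(detections)
    print(len(iframes), "reference frames instead of", len(detections))
    print(search_vehicle(iframes, matches_query=lambda i: i == 120))
    ```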

  6. Automated face detection for occurrence and occupancy estimation in chimpanzees.

    PubMed

    Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S

    2017-03-01

    Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances; they have changed the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet most researchers inspect footage manually, and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimating site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can easily be overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions. This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing.

  7. Strategic Communication Through Design: A Narrative Approach

    DTIC Science & Technology

    2009-12-01

    detainees at Abu Ghraib and their actions were not indicative of approved United States policy. This incident of abuse, documented by the perpetrators...strategic effects after gaining international exposure due to new media. Abu Ghraib is an example of how events at a tactical level can be used by...of Abu Musab al-Zarqawi on June 8, 2006. This characterization was soon to be proved false. Cell phone video footage showed the grim details of the

  8. Operation Enduring Freedom. Joint Center for Operational Analysis Journal, Volume 11, Issue 3, Fall 2009

    DTIC Science & Technology

    2009-01-01

    began a period known as the Great Game, which was a century-and-a-half-long competition for Afghanistan by Britain and Russia. Each of the countries...It is yet to be determined whether or not the United States is capable of sustaining initial success, or if the Great Game will continue...showed Malik Noorafzal video footage of the World Trade Center towers collapsing. He had never seen this and it made a deep impression. He had heard

  9. NASA Dryden's Lori Losey was named NASA's 2004 Videographer of the Year in part for her camera work during NASA's AirSAR 2004 science mission in Chile.

    NASA Image and Video Library

    2004-03-11

    Lori Losey, an employee of Arcata Associates at Dryden, was honored with NASA's 2004 Videographer of the Year award for her work in two of the three categories in the NASA video competition, public affairs and documentation. In the public affairs category, Losey received a first-place citation for her footage of an Earth Science mission that was flown aboard NASA's DC-8 Flying Laboratory in South America last year. Her footage not only depicted the work of the scientists aboard the aircraft and on the ground, but she also obtained spectacular footage of flora and fauna in the mission's target area that helped communicate the environmental research goals of the project. Losey also took first place in the documentation category for her acquisition of technical videography of the X-45A Unmanned Combat Air Vehicle flight tests. The video, shot with a hand-held camera from the rear seat of a NASA F/A-18 mission support aircraft, demonstrated her capabilities in recording precise technical visual data in a very challenging airborne environment. The award was presented to Losey during a NASA reception at the National Association of Broadcasters convention in Las Vegas April 19. A three-judge panel evaluated entries for public affairs, documentation and production videography on professional excellence, technical quality, originality, creativity within restrictions of the project, and applicability to NASA and its mission. Entries consisted of a continuous video sequence or three views of the same subject for a maximum of three minutes duration. Linda Peters, Arcata Associates' Video Systems Supervisor at NASA Dryden, noted, "Lori is a talented videographer who has demonstrated extraordinary abilities with the many opportunities she has received in her career at NASA." Losey's award was the second major NASA video award won by members of the Dryden video team in two years. Steve Parcel took first place in the documentation category last year for his camera and editing

  10. Video quality assessment using M-SVD

    NASA Astrophysics Data System (ADS)

    Tao, Peining; Eskicioglu, Ahmet M.

    2007-01-01

    Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error across all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
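
    A hedged sketch of an SVD-based frame quality score in the spirit of the measure described above is given below: for each 8x8 block, the singular values of the reference and distorted blocks are compared, and the per-block deviations are pooled over the frame. Block size and pooling are assumptions rather than the published definition.

    ```python
    # Hedged numpy sketch of an SVD-based frame quality score (not the exact M-SVD).
    import numpy as np

    def msvd_frame(ref: np.ndarray, dist: np.ndarray, b: int = 8) -> float:
        h, w = (ref.shape[0] // b) * b, (ref.shape[1] // b) * b
        devs = []
        for i in range(0, h, b):
            for j in range(0, w, b):
                s_ref = np.linalg.svd(ref[i:i+b, j:j+b], compute_uv=False)
                s_dst = np.linalg.svd(dist[i:i+b, j:j+b], compute_uv=False)
                devs.append(np.sqrt(((s_ref - s_dst) ** 2).sum()))
        devs = np.asarray(devs)
        return float(np.abs(devs - np.median(devs)).mean())   # 0 for identical frames

    rng = np.random.default_rng(4)
    frame = rng.random((64, 64)) * 255
    print(msvd_frame(frame, frame))                            # -> 0.0
    print(msvd_frame(frame, frame + rng.normal(0, 10, frame.shape)))
    ```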

  11. Housing conditions influence cortical and behavioural reactions of sheep in response to videos showing social interactions of different valence.

    PubMed

    Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz

    2015-05-01

    Mood, as a long-term affective state, is thought to modulate short-term emotional reactions in animals, but the details of this interplay have hardly been investigated experimentally. Apart from a basic interest in this affective system, mood is likely to have an important impact on animal welfare, as bad mood may taint all emotional experience. In the present study about mood - emotion interaction, 29 sheep were kept under predictable, stimulus-rich or unpredictable, stimulus-poor housing conditions, to induce different mood states. In an experiment, the animals were confronted with video sequences of social interactions of conspecifics showing agonistic interactions, ruminating or tolerantly co-feeding as stimuli of different valences. Emotional reactions were assessed by measuring frontal brain activity using functional near-infrared spectroscopy and by recording behavioral reactions. Attentiveness of the sheep decreased from videos showing agonistic interactions to ruminating sheep to those displaying co-feeding sheep. Seeing agonistic interactions was also associated with a deactivation of the frontal cortex, specifically in animals living under predictable, stimulus-rich housing conditions. These sheep generally showed less attentiveness and locomotor activity and they had their ears in a forward position less often and in a backward position more often than the sheep from the unpredictable, stimulus-poor conditions. Housing conditions influenced how the sheep behaved, which can either be thought to be mediated by mood or by the animals' previous experience with stimulus-richness in their housing conditions. Frontal cortical activity may not depend on valence only, but also on the perceptual channel through which the stimuli were perceived. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Slow motion in films and video clips: Music influences perceived duration and emotion, autonomic physiological activation and pupillary responses.

    PubMed

    Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning

    2018-01-01

    Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional response to media clips containing decelerated human motion, with or without music using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real-time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music compared to visual-only presentations strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre affected responses in addition. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.

  13. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impact. One video shows a projectile impacting a Kevlar-wrapped aluminum bottle containing 3,000 psi gaseous oxygen. Another video shows animations of a two-stage light gas gun.

  14. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine applications, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics used for quality evaluation assess the perceptual quality of the video. For a medical video, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality–oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. Cardiac ultrasound video quality index, the proposed metric, is a full reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
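
    The sketch below gives an illustrative full-reference comparison in the spirit described above, combining an edge-structure term (Sobel) with a frame-to-frame motion term. The weighting and pooling are assumptions, not the published CUQI formulation; it assumes NumPy and SciPy.

    ```python
    # Illustrative full-reference sketch combining edge and motion similarity
    # (an assumption-laden stand-in, not the published metric).
    import numpy as np
    from scipy.ndimage import sobel

    def edge_mag(frame):
        return np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))

    def similarity(a, b, eps=1e-8):
        a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

    def quality(ref, dst):
        """ref, dst: (T, H, W) arrays. Average edge and motion similarity over time."""
        edge = np.mean([similarity(edge_mag(r), edge_mag(d)) for r, d in zip(ref, dst)])
        motion = np.mean([similarity(ref[t] - ref[t-1], dst[t] - dst[t-1])
                          for t in range(1, len(ref))])
        return 0.5 * edge + 0.5 * motion

    rng = np.random.default_rng(5)
    ref = rng.random((6, 32, 32))
    print(quality(ref, ref))                          # -> ~1.0 for an identical video
    print(quality(ref, ref + 0.2 * rng.random(ref.shape)))
    ```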

  15. STS-66 Mission Highlights Resource Tape

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This video contains the mission highlights of the STS-66 Space Shuttle Atlantis Mission in November 1994. Astronauts included: Don McMonagle (Mission Commander), Kurt Brown, Ellen Ochoa (Payload Commander), Joe Tanner, Scott Parazynski, and Jean-Francois Clervoy (collaborating French astronaut). Footage includes: pre-launch suitup, entering Space Shuttle, countdown and launching of Shuttle, EVA activities (ATLAS-3, CRISTA/SPAS, SSBUV/A, ESCAPE-2), on-board experiments dealing with microgravity and its effects, protein crystal growth experiments, daily living and sleeping compartment footage, earthviews of various meteorological processes (dust storms, cloud cover, ocean storms), pre-landing and land footage (both from inside the Shuttle and from outside with long range cameras), and tracking and landing shots from inside Mission Control Center. Included is air-to-ground communication between Mission Control and the Shuttle. This Shuttle was the last launch of 1994.

  16. STS-66 mission highlights resource tape

    NASA Astrophysics Data System (ADS)

    1995-04-01

    This video contains the mission highlights of the STS-66 Space Shuttle Atlantis Mission in November 1994. Astronauts included: Don McMonagle (Mission Commander), Kurt Brown, Ellen Ochoa (Payload Commander), Joe Tanner, Scott Parazynski, and Jean-Francois Clervoy (collaborating French astronaut). Footage includes: pre-launch suitup, entering Space Shuttle, countdown and launching of Shuttle, EVA activities (ATLAS-3, CRISTA/SPAS, SSBUV/A, ESCAPE-2), on-board experiments dealing with microgravity and its effects, protein crystal growth experiments, daily living and sleeping compartment footage, earthviews of various meteorological processes (dust storms, cloud cover, ocean storms), pre-landing and land footage (both from inside the Shuttle and from outside with long range cameras), and tracking and landing shots from inside Mission Control Center. Included is air-to-ground communication between Mission Control and the Shuttle. This Shuttle was the last launch of 1994.

  17. MSFC April 2016 Resource Reel

    NASA Image and Video Library

    2016-04-27

    Name/Title of Video: Marshall Space Flight Center Media Resource Reel 2016. Description: Edited b-roll video of NASA's Marshall Space Flight Center in Huntsville, Ala., and of various projects and programs located at or associated with the center. For more information and more detailed footage, please contact the center's Public & Employee Communications Office. PAO Name: Jennifer Stanfield; Phone Number: 256-544-0034; Email Address: jennifer.stanfield@nasa.gov

  18. Lithium-Ion Small Cell Battery Shorting Study

    NASA Technical Reports Server (NTRS)

    Pearson, Chris; Curzon, David; Blackmore, Paul; Rao, Gopalakrishna

    2004-01-01

    AEA performed a hard-short study on various cell configurations whilst monitoring voltage, current and temperature. Video recording was also used to verify evidence of cell venting. The presentation summarizes the results of the study, including video footage of typical samples. The need for diode protection in manned applications is identified. The standard AEA approach of using fused connectors during AIT (assembly, integration, and test) for unmanned applications is also described.

  19. Exploring the dark energy biosphere, 15 seconds at a time

    NASA Astrophysics Data System (ADS)

    Petrone, C.; Tossey, L.; Biddle, J.

    2016-12-01

    Science communication often suffers from numerous pitfalls including jargon, complexity, a general lack of (science) education of the audience, and short attention spans. With the Center for Dark Energy Biosphere Investigations (C-DEBI), Delaware Sea Grant is expanding its collection of 15 Second Science videos, which deliver complex science topics with visually stimulating footage and succinct audio. Featuring a diverse cast of scientists and educators in front of the camera, we have expanded our reach into the public and classrooms. We are also experimenting with smartphone-based virtual reality for a more immersive experience of the deep! We will show you the process for planning, producing, and posting our #15secondscience videos and VR segments, and how we are evaluating their effectiveness.

  20. NOAA - National Oceanic and Atmospheric Administration - Media Resources

    Science.gov Websites

    If you cannot find the footage you're looking for in the shot sheets provided below or the complete B-Roll list, please send an e-mail to the NOAA Video Studio at broll@noaa.gov. NOAA B-Roll Shot Sheets (Text & PDF

  1. Tailhook 91. Part 2. Events at the 35th Annual Tailhook Symposium

    DTIC Science & Technology

    1993-02-01

    In the HS-1 suite (room 315), a few officers recorded over combat footage in a video camcorder to memorialize their mooning activities. They left the...ranging from "soft core" to "hard core" videos and slides. A few suites simply used the Hilton Hotel "pay for view" television to rent adult movies, which...paid strippers on Saturday night. Other squadrons known to have shown adult-oriented videos were VX-4 and Top Gun. The MAWTS-1 squadron reportedly

  2. Evaluation of automatic video summarization systems

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.

    2006-01-01

    Compact representations of video data, or video summaries, greatly enhance efficient video browsing. However, rigorous evaluation of video summaries generated by automatic summarization systems is a complicated process. In this paper we examine the summary evaluation problem. Text summarization is the oldest and most successful summarization domain; we show some parallels between these two domains and introduce methods and terminology. Finally, we present results from a comprehensive summary evaluation that we have performed.

  3. Video game training and the reward system.

    PubMed

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  4. STS-7 Launch and Landing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The prelaunch, launch, and landing activities of the STS-7 Space Shuttle mission are highlighted in this video, with brief footage of the deployment of the Shuttle Pallet Satellite (SPAS). The flight crew consisted of Cmdr. Bob Crippen, Pilot Rick Hauck, and Mission Specialists John Fabian, Dr. Sally Ride, and Norman Thagard. With this mission, Cmdr. Crippen became the first astronaut to fly twice on a Space Shuttle mission, and Dr. Sally Ride became the first American woman to fly in space. There is a large amount of footage of the Space Shuttle taken from the aircraft that accompany Shuttle launches and landings.

  5. Bullet-Block Science Video Puzzle

    ERIC Educational Resources Information Center

    Shakur, Asif

    2015-01-01

    A science video blog, which has gone viral, shows a wooden block shot by a vertically aimed rifle. The video shows that the block hit dead center goes exactly as high as the one shot off-center. (Fig. 1). The puzzle is that the block shot off-center carries rotational kinetic energy in addition to the gravitational potential energy. This leads a…

  6. MAF Resource Reel August 2016

    NASA Image and Video Library

    2016-08-01

    Edited b-roll video of NASA's Michoud Assembly Facility, which is managed by the Marshall Space Flight Center in Huntsville, Alabama. This B-roll shows various projects including manufacturing of the Space Launch System core stage and the Orion spacecraft pressure vessel. It includes interior and exterior views of the facility. For more information and more detailed footage, please contact the center's Public & Employee Communications Office. PAO Name: Tracy McMahan; Phone Number: 256-544-0034; Email Address: tracy.mcmahan@nasa.gov

  7. Video File - Eclipse Event At Stennis Space Center

    NASA Image and Video Library

    2017-08-21

    On Monday, Aug. 21, NASA provided coast-to-coast coverage of the solar eclipse across America – featuring views of the phenomenon from unique vantage points, including from the ground, from aircraft, and from spacecraft including the ISS, during a live broadcast seen on NASA Television and the agency’s website.  This is footage from Stennis Space Center.

  8. Changes, disruption and innovation: An investigation of the introduction of new health information technology in a microbiology laboratory.

    PubMed

    Toouli, George; Georgiou, Andrew; Westbrook, Johanna

    2012-01-01

    It is expected that health information technology (HIT) will deliver a safer, more efficient and effective health care system. The aim of this study was to undertake a qualitative and video-ethnographic examination of the impact of information technologies on work processes in the reception area of a Microbiology Department, to ascertain what changed, how it changed and the impact of the change. The setting for this study was the microbiology laboratory of a large tertiary hospital in Sydney. The study consisted of qualitative (interview and focus group) data and observation sessions for the period August 2005 to October 2006 along with video footage shot in three sessions covering the original system and the two stages of the Cerner implementation. Data analysis was assisted by NVivo software and process maps were produced from the video footage. There were two laboratory information systems observed in the video footage with computerized provider order entry introduced four months later. Process maps highlighted the large number of pre data entry steps with the original system whilst the newer system incorporated many of these steps in to the data entry stage. However, any time saved with the new system was offset by the requirement to complete some data entry of patient information not previously required. Other changes noted included the change of responsibilities for the reception staff and the physical changes required to accommodate the increased activity around the data entry area. Implementing a new HIT is always an exciting time for any environment but ensuring that the implementation goes smoothly and with minimal trouble requires the administrator and their team to plan well in advance for staff training, physical layout and possible staff resource reallocation.

  9. Involving patients in understanding hospital infection control using visual methods.

    PubMed

    Wyer, Mary; Jackson, Debra; Iedema, Rick; Hor, Su-Yin; Gilbert, Gwendolyn L; Jorm, Christine; Hooker, Claire; O'Sullivan, Matthew Vincent Neil; Carroll, Katherine

    2015-06-01

    This paper explores patients' perspectives on infection prevention and control. Healthcare-associated infections are the most frequent adverse event experienced by patients. Reduction strategies have predominantly addressed front-line clinicians' practices; patients' roles have been less explored. Video-reflexive ethnography. Fieldwork undertaken at a large metropolitan hospital in Australia involved 300 hours of ethnographic observations, including 11 hours of video footage. This paper focuses on eight occasions, where video footage was shown back to patients in one-on-one reflexive sessions. Viewing and discussing video footage of clinical care enabled patients to become articulate about infection risks, and to identify their own roles in reducing transmission. Barriers to detailed understandings of preventative practices and their roles included lack of conversation between patients and clinicians about infection prevention and control, and being ignored or contradicted when challenging perceived suboptimal practice. It became evident that to compensate for clinicians' lack of engagement around infection control, participants had developed a range of strategies, of variable effectiveness, to protect themselves and others. Finally, the reflexive process engendered closer scrutiny and a more critical attitude to infection control that increased patients' sense of agency. This study found that patients actively contribute to their own safety. Their success, however, depends on the quality of patient-provider relationships and conversations. Rather than treating patients as passive recipients of infection control practices, clinicians can support and engage with patients' contributions towards achieving safer care. This study suggests that if clinicians seek to reduce infection rates, they must start to consider patients as active contributors to infection control. Clinicians can engage patients in conversations about practices and pay attention to patient feedback

  10. Video Feedforward for Reading

    ERIC Educational Resources Information Center

    Dowrick, Peter W.; Kim-Rupnow, Weol Soon; Power, Thomas J.

    2006-01-01

    Video feedforward can create images of positive futures, as has been shown by researchers using self-modeling methods to teach new skills with carefully planned and edited videos that show the future capability of the individual. As a supplement to tutoring provided by community members, we extended these practices to young children struggling to…

  11. Packetized video on MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1986-11-01

    Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.

  12. Physics Reality Show

    NASA Astrophysics Data System (ADS)

    Erukhimova, Tatiana

    The attention span of K-12 students is very short; they are used to digesting information in short snippets through social media and TV. To get the students interested in physics, we created the Physics Reality Show: a series of staged short videos with duration no longer than a few minutes. Each video explains and illustrates one physics concept or law through a fast-paced sequence of physics demonstrations and experiments. The cast consists entirely of physics undergraduate students with artistic abilities and substantial experience in showing physics demonstrations at current outreach events run by the department: Physics Shows and Physics & Engineering Festival. Undergraduate students are of almost the same age as their high-school audience. They are in the best position to connect with kids and convey their fascination with physics. The PI and other faculty members who are involved in the outreach advise and coach the cast. They help students in staging the episodes and choosing the most exciting and relevant demonstrations. Supported by the APS mini-outreach Grant.

  13. The Effect of Background Music in Shark Documentaries on Viewers' Perceptions of Sharks

    PubMed Central

    Hastings, Philip A.; Gneezy, Ayelet

    2016-01-01

    Despite the ongoing need for shark conservation and management, prevailing negative sentiments marginalize these animals and legitimize permissive exploitation. These negative attitudes arise from an instinctive, yet exaggerated fear, which is validated and reinforced by disproportionate and sensationalistic news coverage of shark ‘attacks’ and by highlighting shark-on-human violence in popular movies and documentaries. In this study, we investigate another subtler, yet powerful factor that contributes to this fear: the ominous background music that often accompanies shark footage in documentaries. Using three experiments, we show that participants rated sharks more negatively and less positively after viewing a 60-second video clip of swimming sharks set to ominous background music, compared to participants who watched the same video clip set to uplifting background music, or silence. This finding was not an artifact of soundtrack alone because attitudes toward sharks did not differ among participants assigned to audio-only control treatments. This is the first study to demonstrate empirically that the connotative attributes of background music accompanying shark footage affect viewers’ attitudes toward sharks. Given that nature documentaries are often regarded as objective and authoritative sources of information, it is critical that documentary filmmakers and viewers are aware of how the soundtrack can affect the interpretation of the educational content. PMID:27487003

  14. The Effect of Background Music in Shark Documentaries on Viewers' Perceptions of Sharks.

    PubMed

    Nosal, Andrew P; Keenan, Elizabeth A; Hastings, Philip A; Gneezy, Ayelet

    2016-01-01

    Despite the ongoing need for shark conservation and management, prevailing negative sentiments marginalize these animals and legitimize permissive exploitation. These negative attitudes arise from an instinctive, yet exaggerated fear, which is validated and reinforced by disproportionate and sensationalistic news coverage of shark 'attacks' and by highlighting shark-on-human violence in popular movies and documentaries. In this study, we investigate another subtler, yet powerful factor that contributes to this fear: the ominous background music that often accompanies shark footage in documentaries. Using three experiments, we show that participants rated sharks more negatively and less positively after viewing a 60-second video clip of swimming sharks set to ominous background music, compared to participants who watched the same video clip set to uplifting background music, or silence. This finding was not an artifact of soundtrack alone because attitudes toward sharks did not differ among participants assigned to audio-only control treatments. This is the first study to demonstrate empirically that the connotative attributes of background music accompanying shark footage affect viewers' attitudes toward sharks. Given that nature documentaries are often regarded as objective and authoritative sources of information, it is critical that documentary filmmakers and viewers are aware of how the soundtrack can affect the interpretation of the educational content.

  15. Eye of the Beholder: Stage Entrance Behavior and Facial Expression Affect Continuous Quality Ratings in Music Performance

    PubMed Central

    Waddell, George; Williamon, Aaron

    2017-01-01

    Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. These findings demonstrate the importance of visual

  16. Video game training and the reward system

    PubMed Central

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  17. "PAY ATTENTION!": How Teachers and Students Construct Attentiveness in First Grade Classrooms

    ERIC Educational Resources Information Center

    Milman, Noriko Sabene

    2009-01-01

    Research indicates that early school success influences eventual life chances and that attentiveness in the classroom contributes to early school success. Using ethnographic fieldnotes, interviews, short surveys, and video footage collected over three school years, I investigate how teachers and young students experience "attentiveness" during…

  18. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
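
    The fusion of dynamic and static information described above can be illustrated with a small sketch. The function and parameter names below (match_score, sim_inside, sim_outside, alpha) are illustrative assumptions, not the authors' actual formulation; the sketch only shows how per-frame similarities from a pair of fingerprint videos might be combined into one match score.

        import numpy as np

        def match_score(sim_inside, sim_outside, alpha=0.5):
            """Fuse two per-frame-pair similarity matrices into one match score.

            sim_inside  : similarities computed inside the aligned overlap of the
                          two fingerprint videos (static information)
            sim_outside : similarities computed from the dynamic information that
                          falls outside the overlap
            alpha       : fusion weight (illustrative; the paper's weighting differs)
            """
            # Best-matching enrolled frame for each query frame, then average.
            inside = np.max(sim_inside, axis=1).mean()
            outside = np.max(sim_outside, axis=1).mean()
            return alpha * inside + (1.0 - alpha) * outside

        # Toy example: 5 query frames vs 6 enrolled frames, random similarities.
        rng = np.random.default_rng(0)
        s_in, s_out = rng.random((5, 6)), rng.random((5, 6))
        print(round(match_score(s_in, s_out), 3))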

  19. Gamifying Video Object Segmentation.

    PubMed

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.
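
    A toy version of an energy function of the kind mentioned above, combining a unary cost from human-provided location priors with a pairwise smoothness cost over spatially and temporally adjacent regions, might look like the following. All names, weights and the deliberately simplified form are assumptions for illustration; the paper's actual energy and its optimization are more elaborate.

        import numpy as np

        def segmentation_energy(labels, region_centers, clicks, edges, lam=2.0):
            """Toy energy for a binary labeling of video regions (1 = object).

            labels         : (N,) array of 0/1 region labels
            region_centers : (N, 3) array of (x, y, t) region centroids
            clicks         : (M, 3) array of human-provided object locations
            edges          : (i, j) pairs of spatially/temporally adjacent regions
            lam            : weight of the smoothness term (assumed value)
            """
            # Unary term: object regions should lie near a click, background far away.
            d = np.min(np.linalg.norm(region_centers[:, None] - clicks[None], axis=2), axis=1)
            unary = np.where(labels == 1, d, np.maximum(0.0, 50.0 - d)).sum()
            # Pairwise term: adjacent regions prefer to share the same label.
            pairwise = sum(labels[i] != labels[j] for i, j in edges)
            return unary + lam * pairwise

        centers = np.array([[10, 10, 0], [12, 11, 1], [80, 90, 0]], dtype=float)
        clicks = np.array([[11, 10, 0]], dtype=float)
        edges = [(0, 1), (1, 2)]
        print(segmentation_energy(np.array([1, 1, 0]), centers, clicks, edges))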

  20. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity, while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the image peak signal-to-noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.

  1. Lecture Videos in Online Courses: A Follow-Up

    ERIC Educational Resources Information Center

    Evans, Heather K.; Cordova, Victoria

    2015-01-01

    In a recent study regarding online lecture videos, Evans (2014) shows that lecture videos are not superior to still slides. Using two Introduction to American Government courses, taught in a 4-week summer session, she shows that students in a non-video course had higher satisfaction with the course and instructor and performed better on exams than…

  2. Packetized Video On MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1987-07-01

    Theoretical analysis of an integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.

  3. Illusory control, gambling, and video gaming: an investigation of regular gamblers and video game players.

    PubMed

    King, Daniel L; Ejova, Anastasia; Delfabbro, Paul H

    2012-09-01

    There is a paucity of empirical research examining the possible association between gambling and video game play. In two studies, we examined the association between video game playing, erroneous gambling cognitions, and risky gambling behaviour. One hundred and fifteen participants, including 65 electronic gambling machine (EGM) players and 50 regular video game players, were administered a questionnaire that examined video game play, gambling involvement, problem gambling, and beliefs about gambling. We then assessed each group's performance on a computerised gambling task that involved real money. A post-game survey examined perceptions of the skill and chance involved in the gambling task. The results showed that video game playing itself was not significantly associated with gambling involvement or problem gambling status. However, among those persons who both gambled and played video games, video game playing was uniquely and significantly positively associated with the perception of direct control over chance-based gambling events. Further research is needed to better understand the nature of this association, as it may assist in understanding the impact of emerging digital gambling technologies.

  4. Blind prediction of natural video quality.

    PubMed

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
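
    As an illustration of the kind of spatio-temporal, DCT-domain statistics such a blind model relies on, the sketch below computes block-DCT coefficients of successive frame differences and summarizes them with simple statistics. It assumes scipy is available; the actual feature set of video BLIINDS, and any mapping from features to a quality score, is far richer than this.

        import numpy as np
        from scipy.fft import dctn  # 2-D type-II DCT

        def frame_diff_dct_features(frames, block=16):
            """Rough NR features: statistics of block-DCT coefficients of
            successive frame differences (frames: array of 2-D grayscale)."""
            feats = []
            for prev, cur in zip(frames[:-1], frames[1:]):
                diff = cur.astype(float) - prev.astype(float)
                h, w = diff.shape
                coeffs = []
                for y in range(0, h - block + 1, block):
                    for x in range(0, w - block + 1, block):
                        c = dctn(diff[y:y + block, x:x + block], norm="ortho")
                        coeffs.append(np.abs(c[1:, 1:]).ravel())  # drop DC row/column
                c = np.concatenate(coeffs)
                # mean, spread, and a heavy-tail indicator per frame transition
                feats.append([c.mean(), c.std(), (np.abs(c - c.mean()) ** 3).mean()])
            return np.array(feats)

        rng = np.random.default_rng(1)
        clip = rng.integers(0, 256, size=(4, 64, 64))
        print(frame_diff_dct_features(clip).shape)   # (3, 3): stats per transition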

  5. Satisfaction with Online Teaching Videos: A Quantitative Approach

    ERIC Educational Resources Information Center

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2017-01-01

    We analyse the factors that determine the number of clicks on the "Like" button in online teaching videos, with a sample of teaching videos in the area of Microeconomics across Spanish-speaking countries. The results show that users prefer short online teaching videos. Moreover, some features of the videos have a significant impact on…

  6. Objectively Determining the Educational Potential of Computer and Video-Based Courseware; or, Producing Reliable Evaluations Despite the Dog and Pony Show.

    ERIC Educational Resources Information Center

    Barrett, Andrew J.; And Others

    The Center for Interactive Technology, Applications, and Research at the College of Engineering of the University of South Florida (Tampa) has developed objective and descriptive evaluation models to assist in determining the educational potential of computer and video courseware. The computer-based courseware evaluation model and the video-based…

  7. The Tacoma Narrows Bridge Collapse on Film and Video

    ERIC Educational Resources Information Center

    Olson, Don; Hook, Joseph; Doescher, Russell; Wolf, Steven

    2015-01-01

    This month marks the 75th anniversary of the Tacoma Narrows Bridge collapse. During a gale on Nov. 7, 1940, the bridge exhibited remarkable oscillations before collapsing spectacularly (Figs. 1-5). Physicists over the years have spent a great deal of time and energy studying this event. By using open-source analysis tools and digitized footage of…

  8. Video document

    NASA Astrophysics Data System (ADS)

    Davies, Bob; Lienhart, Rainer W.; Yeo, Boon-Lock

    1999-08-01

    The metaphor of film and TV permeates the design of software to support video on the PC. Simply transplanting the non-interactive, sequential experience of film to the PC fails to exploit the virtues of the new context. Video on the PC should be interactive and non-sequential. This paper experiments with a variety of tools for using video on the PC that exploit the new context of the PC. Some features are more successful than others. Applications that use these tools are explored, including primarily the home video archive but also streaming video servers on the Internet. The ability to browse, edit, abstract and index large volumes of video content such as home video and corporate video is a problem without an appropriate solution in today's market. The current tools available are complex, unfriendly video editors, requiring hours of work to prepare a short home video, far more work than a typical home user can be expected to provide. Our proposed solution treats video like a text document, providing functionality similar to a text editor. Users can browse, interact, edit and compose one or more video sequences with the same ease and convenience as handling text documents. With this level of text-like composition, we call what is normally a sequential medium a 'video document'. An important component of the proposed solution is shot detection, the ability to detect when a shot started or stopped. When combined with a spreadsheet of key frames, the result becomes a grid of pictures that can be manipulated and viewed in the same way that a spreadsheet can be edited. Multiple video documents may be viewed, joined, manipulated, and seamlessly played back. Abstracts of unedited video content can be produced automatically to create novel video content for export to other venues. Edited and raw video content can be published to the net or burned to a CD-ROM with a self-installing viewer for Windows 98 and Windows NT 4.0.
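
    Shot detection of the kind described above is commonly implemented by thresholding a frame-to-frame dissimilarity measure. The following sketch, with assumed parameters and a grayscale-histogram distance, illustrates the general idea rather than the authors' specific detector.

        import numpy as np

        def detect_shot_boundaries(frames, bins=32, threshold=0.4):
            """Return frame indices where a new shot is likely to start.

            frames    : iterable of 2-D grayscale frames (uint8 arrays)
            threshold : fraction of histogram mass that must change (assumed value)
            """
            boundaries = []
            prev_hist = None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
                hist = hist / hist.sum()
                if prev_hist is not None:
                    # L1 distance between normalized histograms, in [0, 2]
                    if np.abs(hist - prev_hist).sum() > threshold:
                        boundaries.append(i)
                prev_hist = hist
            return boundaries

        # Toy clip: dark frames, then bright frames -> one boundary at frame 3
        clip = [np.full((48, 64), 20, np.uint8)] * 3 + [np.full((48, 64), 200, np.uint8)] * 3
        print(detect_shot_boundaries(clip))   # [3]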

  9. NASA Video Catalog

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Subject Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.

  10. Content-based video retrieval by example video clip

    NASA Astrophysics Data System (ADS)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
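
    The signature-based matching described above can be sketched roughly as follows: each frame is reduced to a coarse signature (here simply block means, standing in for the per-block DC coefficients), and two clips are compared by sliding the query's signature sequence over the target's. The names and the distance measure are illustrative assumptions, not the paper's exact 'DC+M' formulation, and the motion component is omitted.

        import numpy as np

        def frame_signature(frame, grid=8):
            """Coarse signature: mean intensity of each cell of a grid x grid tiling
            (a stand-in for the per-block DC coefficients of compressed video)."""
            h, w = frame.shape
            cells = frame[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
            return cells.mean(axis=(1, 3)).ravel()

        def clip_distance(query_sigs, target_sigs):
            """Best alignment of the query signature sequence inside the target."""
            q, t = len(query_sigs), len(target_sigs)
            if t < q:
                return clip_distance(target_sigs, query_sigs)
            best = np.inf
            for offset in range(t - q + 1):
                window = target_sigs[offset:offset + q]
                d = np.mean([np.linalg.norm(a - b) for a, b in zip(query_sigs, window)])
                best = min(best, d)
            return best

        rng = np.random.default_rng(2)
        video = rng.integers(0, 256, size=(30, 64, 64)).astype(float)
        sigs = [frame_signature(f) for f in video]
        print(clip_distance(sigs[10:15], sigs))   # ~0: the query is an excerpt of the target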

  11. Critical Assessment of Video Production in Teacher Education: Can Video Production Foster Community-Engaged Scholarship?

    ERIC Educational Resources Information Center

    Yang, Kyung-Hwa

    2014-01-01

    In the theoretical framework of production pedagogy, I reflect on a video production project conducted in a teacher education program and discuss the potential of video production to foster community-engaged scholarship among pre-service teachers. While the importance of engaging learners in creating media has been emphasized, studies show little…

  12. Video Measurements: Quantity or Quality

    ERIC Educational Resources Information Center

    Zajkov, Oliver; Mitrevski, Boce

    2012-01-01

    Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…

  13. Interventions for Speech Sound Disorders in Children

    ERIC Educational Resources Information Center

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  14. Technologies and Techniques for Supporting Facilitated Video

    ERIC Educational Resources Information Center

    Linnell, Natalie

    2011-01-01

    Worldwide, demand for education of all kinds is increasing beyond the capacity to provide it. One approach that shows potential for addressing this demand is facilitated video. In facilitated video, an educator is recorded teaching, and that video is sent to a remote site where it is shown to students by a facilitator who creates interaction…

  15. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    PubMed

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region of interest based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.

  16. Longer you play, the more hostile you feel: examination of first person shooter video games and aggression during video game play.

    PubMed

    Barlett, Christopher P; Harris, Richard J; Baldassaro, Ross

    2007-01-01

    This study investigated the effects of video game play on aggression. Using the General Aggression Model, as applied to video games by Anderson and Bushman [2002], this study measured physiological arousal, state hostility, and how aggressively participants would respond to three hypothetical scenarios. In addition, this study measured each of these variables multiple times to gauge how aggression would change with increased video game play. Results showed a significant increase from baseline in hostility and aggression (based on two of the three story stems), which is consistent with the General Aggression Model. This study adds to the existing literature on video games and aggression by showing that increased play of a violent first person shooter video game can significantly increase aggression from baseline. 2007 Wiley-Liss, Inc.

  17. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  18. STS-109 Mission Highlights Resource Tape. Part 4 of 4; Flight Days 8 - 12

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This video, Part 4 of 4, shows footage of crew activities from flight days 8 through 12 of STS-109. The crew included: Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, Richard Linnehan, James Newman, Michael Massimino, Mission Specialists. The activities from other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 2 of 4 (internal ID 2002137664), and 'STS-109 Mission Highlights Resource Tape' Part 3 of 4 (internal ID 2002139476). The primary activity on flight day 8 was an EVA (extravehicular activity) by Grunsfeld and Linnehan to install a cryocooler and radiator for the NICMOS (Near Infrared Camera and Multi-Object Spectrometer) on the HST (Hubble Space Telescope). Before returning to Columbia's airlock, the astronauts, with a cloudy background, hold onto the orbiter and offer their thoughts on the significance of their mission, the HST, and spaceflight. Footage from flight day 9 includes the grappling, unberthing, and deployment of the HST from Columbia, and the crew coordinating and videotaping Columbia's departure. Flight day 10 was a relatively inactive day, and flight day 11 includes a checkout of Columbia's aerodynamic surfaces. Columbia landed on flight day 12, which is covered by footage of the crew members speaking during reentry, and their night landing, primarily shown through the orbiter's head-up display. The video includes numerous views of the HST, as well as views of the Galapagos Islands, Madagascar, and Southern Africa with parts of the Atlantic, Indian, and Pacific Oceans, and part of the coast of Chile. The pistol grip space tool is shown in use, and the crew answers two messages from the public, including a message to Massimino from the Fire Department of New York.

  19. STS-109 Mission Highlights Resource Tape

    NASA Astrophysics Data System (ADS)

    2002-05-01

    This video, Part 4 of 4, shows footage of crew activities from flight days 8 through 12 of STS-109. The crew included: Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, Richard Linnehan, James Newman, Michael Massimino, Mission Specialists. The activities from other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 2 of 4 (internal ID 2002137664), and 'STS-109 Mission Highlights Resource Tape' Part 3 of 4 (internal ID 2002139476). The primary activity on flight day 8 was an EVA (extravehicular activity) by Grunsfeld and Linnehan to install a cryocooler and radiator for the NICMOS (Near Infrared Camera and Multi-Object Spectrometer) on the HST (Hubble Space Telescope). Before returning to Columbia's airlock, the astronauts, with a cloudy background, hold onto the orbiter and offer their thoughts on the significance of their mission, the HST, and spaceflight. Footage from flight day 9 includes the grappling, unberthing, and deployment of the HST from Columbia, and the crew coordinating and videotaping Columbia's departure. Flight day 10 was a relatively inactive day, and flight day 11 includes a checkout of Columbia's aerodynamic surfaces. Columbia landed on flight day 12, which is covered by footage of the crew members speaking during reentry, and their night landing, primarily shown through the orbiter's head-up display. The video includes numerous views of the HST, as well as views of the Galapagos Islands, Madagascar, and Southern Africa with parts of the Atlantic, Indian, and Pacific Oceans, and part of the coast of Chile. The pistol grip space tool is shown in use, and the crew answers two messages from the public, including a message to Massimino from the Fire Department of New York.

  20. The emerging High Efficiency Video Coding standard (HEVC)

    NASA Astrophysics Data System (ADS)

    Raja, Gulistan; Khan, Awais

    2013-12-01

    High definition video (HDV) is becoming popular day by day. This paper describes a performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  1. ISS General Resource Reel

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This video is a collection of computer animations and live footage showing the construction and assembly of the International Space Station (ISS). Computer animations show the following: (1) ISS fly around; (2) ISS over a sunrise seen from space; (3) the launch of the Zarya Control Module; (4) a Proton rocket launch; (5) the Space Shuttle docking with Zarya and attaching Zarya to the Unity Node; (6) the docking of the Service Module, Zarya, and Unity to Soyuz; (7) the Space Shuttle docking to ISS and installing the Z1 Truss segment and the Pressurized Mating Adapter (PMA); (8) Soyuz docking to the ISS; (9) the Transhab components; and (10) a complete ISS assembly. Live footage shows the construction of Zarya, the Proton rocket, Unity Node, PMA, Service Module, US Laboratory, Italian Multipurpose Logistics Module, US Airlock, and the US Habitation Module. STS-88 Mission Specialists Jerry Ross and James Newman are seen training in the Neutral Buoyancy Laboratory (NBL). The Expedition 1 crewmembers, William Shepherd, Yuri Gidzenko, and Sergei Krikalev, are shown training in the Black Sea and at Johnson Space Center for water survival.

  2. Coaches' Use of Anticipatory and Counterfactual Regret Messages during Competition

    ERIC Educational Resources Information Center

    Turman, Paul D.

    2005-01-01

    By focusing on coaches' use of anticipatory and counterfactual regret messages, this investigation examined video footage (i.e., pre-game, halftime, and post-game speeches) of high school football coaches' interaction with their athletes during competition. Participants were 17 high school football coaches who were found to use a combination of…

  3. Weather Fundamentals: Rain & Snow. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) gives concise explanations of the various types of precipitation and describes how the water…

  4. Weather Fundamentals: Hurricanes & Tornadoes. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) features information on the deadliest and most destructive storms on Earth. Through satellite…

  5. Chinese Mine Warfare: A PLA Navy Assassin’s Mace Capability (China MaritimeStudy, Number 3)

    DTIC Science & Technology

    2009-06-01

    derived from obsolete torpedoes (e.g., earlier models of China’s Yu series) and launched from submarines, they travel along a user-determined course... video clip, originally at web.search.cctv.com, has been removed from the CCTV website. An image from the television footage has been posted on

  6. Weather Fundamentals: Climate & Seasons. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) describes weather patterns and cycles around the globe. The various types of climates around…

  7. Weather Fundamentals: Meteorology. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) looks at how meteorologists gather and interpret current weather data collected from sources…

  8. Tape It Yourself: Videotapes for Teacher Education

    ERIC Educational Resources Information Center

    Ebsworth, Miriam Eistein; Feknous, Barbara; Loyet, Dianne; Zimmerman, Spencer

    2004-01-01

    This paper describes the development and implementation of a series of videotapes of ESL classes for a pre-service teacher education program grounded in experiential learning theory. The videos included footage of ESL classrooms, and tapes edited and supplemented with interviews of ESL teachers. Our experience demonstrates that-with relatively low…

  9. Closeup of F-15B Flight Test Fixture (FTF) with X-33 Thermal Protection Systems (TPS)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A close-up of the Flight Test Fixture II, mounted on the underside of the F-15B Aerodynamic Flight Facility aircraft. The Thermal Protection System (TPS) samples, which included metallic Inconel tiles, soft Advanced Flexible Reusable Surface Insulation tiles, and sealing materials, were attached to the forward-left side position of the test fixture. In-flight video from the aircraft's on-board video system, as well as chase aircraft photos and video footage, documented the condition of the TPS during flights. Surface pressures over the TPS were measured by thermocouples contained in instrumentation 'islands,' to document shear and shock loads.

  10. Closeup of F-15B Flight Test Fixture (FTF) with X-33 Thermal Protection Systems (TPS)

    NASA Image and Video Library

    1998-05-14

    A close-up of the Flight Test Fixture II, mounted on the underside of the F-15B Aerodynamic Flight Facility aircraft. The Thermal Protection System (TPS) samples, which included metallic Inconel tiles, soft Advanced Flexible Reusable Surface Insulation tiles, and sealing materials, were attached to the forward-left side position of the test fixture. In-flight video from the aircraft's on-board video system, as well as chase aircraft photos and video footage, documented the condition of the TPS during flights. Surface pressures over the TPS were measured by thermocouples contained in instrumentation "islands," to document shear and shock loads.

  11. Interactive Videos Enhance Learning about Socio-Ecological Systems

    ERIC Educational Resources Information Center

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  12. Physiology and behaviour of Atlantic salmon (Salmo salar) smolts during commercial land and sea transport.

    PubMed

    Nomura, M; Sloman, K A; von Keyserlingk, M A G; Farrell, A P

    2009-02-16

    This study examined the physiology (plasma cortisol, glucose, lactate, potassium, sodium and chloride concentrations) and behaviour (underwater video footage) of commercially produced Atlantic salmon (Salmo salar) smolts during transport from freshwater farms to saltwater net pens. Smolts were transported by truck in closed tanks from two freshwater farms to the dock (30-60 min), and then in the flow-through cargo holds of a live-haul vessel, the Sterling Carrier, to the saltwater net pens (~2 h). Some fish were dockside in the vessel for up to 8 h while successive deliveries were loaded into the holds. Fish and water were sampled both before and after truck transport, and then at several time points aboard the vessel. Analysis of plasma constituents showed modest primary and secondary stress responses due to loading and truck transport, and the recovery that occurred dockside in the live-haul vessel was maintained when the vessel was underway. Underwater video footage revealed behavioural differences between fish from the two freshwater facilities that were not evident from the physiological measurements, but the behaviours observed during transport on a live-haul vessel were consistent with a non-stressful environment. Although smolts were subjected to moderately stressful conditions during loading and trucking, they began to recover rapidly aboard the Sterling Carrier. We therefore conclude that smolt transport, as currently conducted by our industry partner, appears to reflect good fish welfare.

  13. [The Questionnaire of Experiences Associated with Video games (CERV): an instrument to detect the problematic use of video games in Spanish adolescents].

    PubMed

    Chamarro, Andres; Carbonell, Xavier; Manresa, Josep Maria; Munoz-Miralles, Raquel; Ortega-Gonzalez, Raquel; Lopez-Morron, M Rosa; Batalla-Martinez, Carme; Toran-Monserrat, Pere

    2014-01-01

    The aim of this study is to validate the Video Game-Related Experiences Questionnaire (CERV in Spanish). The questionnaire consists of 17 items, developed from the CERI (Internet-Related Experiences Questionnaire; Beranuy et al.), and assesses the problematic use of non-massive video games. It was validated for adolescents in Compulsory Secondary Education. To validate the questionnaire, a confirmatory factor analysis (CFA) and an internal consistency analysis were carried out. The factor structure shows two factors: (a) Psychological dependence and use for evasion; and (b) Negative consequences of using video games. Two cut-off points were established for people with no problems in their use of video games (NP), with potential problems in their use of video games (PP), and with serious problems in their use of video games (SP). Results show that there is higher prevalence among males and that problematic use decreases with age. The CERV seems to be a good instrument for the screening of adolescents with difficulties deriving from video game use. Further research should relate problematic video game use with difficulties in other life domains, such as the academic field.

  14. Quality of experience enhancement of high efficiency video coding video streaming in wireless packet networks using multiple description coding

    NASA Astrophysics Data System (ADS)

    Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled

    2018-01-01

    Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large amounts of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying the quantization parameter (QP) values for different video contents to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameters and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.
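
    Multiple description coding of the kind exploited above can be illustrated with a minimal temporal-splitting sketch: the frame sequence is divided into two descriptions (even-indexed and odd-indexed frames) that are streamed separately, and if one description is lost the decoder conceals the missing frames from the surviving one. This is a generic MDC illustration under assumed names, not the specific HEVC-based scheme of the paper.

        import numpy as np

        def mdc_split(frames):
            """Two temporal descriptions: even-indexed and odd-indexed frames."""
            return frames[0::2], frames[1::2]

        def mdc_reconstruct(desc_even, desc_odd, n_frames):
            """Rebuild the sequence; a missing description (None) is concealed by
            averaging the neighbouring frames of the surviving description.
            Assumes at least one description arrives."""
            out = [None] * n_frames
            if desc_even is not None:
                out[0::2] = list(desc_even)
            if desc_odd is not None:
                out[1::2] = list(desc_odd)
            for i, f in enumerate(out):
                if f is None:                      # conceal from temporal neighbours
                    prev = out[i - 1] if i > 0 else out[i + 1]
                    nxt = out[i + 1] if i + 1 < n_frames and out[i + 1] is not None else prev
                    out[i] = (prev.astype(float) + nxt.astype(float)) / 2.0
            return np.stack(out)

        rng = np.random.default_rng(3)
        video = rng.integers(0, 256, size=(8, 16, 16)).astype(np.uint8)
        even, odd = mdc_split(video)
        lossy = mdc_reconstruct(even, None, len(video))   # odd description lost
        print(lossy.shape, float(np.mean(np.abs(lossy - video))))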

  15. ISS Expedition 42 / 43 Soyuz Rollout

    NASA Image and Video Library

    2014-11-26

    NASA TV (NTV) video file of ISS Expedition 42 / 43 Soyuz Spacecraft rollout on a train to the launch pad by the Baikonur Cosmodrome in Kazakhstan. Includes footage of the rollout, the rocket being raised to upright position and interviews with Astronaut Mike Fossum, ISS Assistant Director of Operations and Astronaut Sunita Williams.

  16. The Effectiveness of Classroom Capture Technology

    ERIC Educational Resources Information Center

    Ford, Maire B.; Burns, Colleen E.; Mitch, Nathan; Gomez, Melissa M.

    2012-01-01

    The use of classroom capture systems (systems that capture audio and video footage of a lecture and attempt to replicate a classroom experience) is becoming increasingly popular at the university level. However, research on the effectiveness of classroom capture systems in the university classroom has been limited due to the recent development and…

  17. Weather Fundamentals: Wind. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) describes the roles of the sun, temperature, and air pressure in creating the incredible power…

  18. Plasma Physics Lab and the Tokamak Fusion Test Reactor, 1989

    ScienceCinema

    None

    2018-01-16

    From the Princeton University Archives: Promotional video about the Plasma Physics Lab and the new Tokamak Fusion Test Reactor (TFTR), with footage of the interior, machines, and scientists at work. This film is discussed in the audiovisual blog of the Seeley G. Mudd Manuscript Library, which holds the archives of Princeton University.

  19. Weather Fundamentals: Clouds. [Videotape].

    ERIC Educational Resources Information Center

    1998

    The videos in this educational series, for grades 4-7, help students understand the science behind weather phenomena through dramatic live-action footage, vivid animated graphics, detailed weather maps, and hands-on experiments. This episode (23 minutes) discusses how clouds form, the different types of clouds, and the important role they play in…

  20. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  1. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color, brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when supposedly unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
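
    A stripped-down software analogue of such a trigger can be written as a frame-differencing loop: the trigger fires when the fraction of pixels that changed significantly between consecutive frames exceeds a threshold. The parameter names and values below are assumptions for illustration, not the design of the reported hardware system.

        import numpy as np

        def video_event_trigger(frames, pixel_delta=25, changed_fraction=0.02):
            """Yield frame indices at which an 'event' (significant change) occurs.

            pixel_delta      : per-pixel intensity change that counts as 'changed'
            changed_fraction : fraction of changed pixels needed to fire the trigger
            (both thresholds are illustrative assumptions)
            """
            prev = None
            for i, frame in enumerate(frames):
                cur = frame.astype(np.int16)
                if prev is not None:
                    changed = np.abs(cur - prev) > pixel_delta
                    if changed.mean() > changed_fraction:
                        yield i
                prev = cur

        # Toy sequence: static scene, then a bright "object" appears in frame 2.
        frames = [np.zeros((32, 32), np.uint8) for _ in range(4)]
        frames[2][8:16, 8:16] = 255
        print(list(video_event_trigger(frames)))   # [2, 3]: appearance, then disappearance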

  2. Correlates of video games playing among adolescents in an Islamic country

    PubMed Central

    2010-01-01

    Background: No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. Methods: This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing. Results: Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively. Conclusion: Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors. Interestingly, "non-gamers" clearly

  3. Correlates of video games playing among adolescents in an Islamic country.

    PubMed

    Allahverdipour, Hamid; Bazargan, Mohsen; Farhadinasab, Abdollah; Moeini, Babak

    2010-05-27

    No study has ever explored the prevalence and correlates of video game playing among children in the Islamic Republic of Iran. This study describes patterns and correlates of excessive video game use in a random sample of middle-school students in Iran. Specifically, we examine the relationship between video game playing and psychological well-being, aggressive behaviors, and adolescents' perceived threat of video-computer game playing. This cross-sectional study was performed with a random sample of 444 adolescents recruited from eight middle schools. A self-administered, anonymous questionnaire covered socio-demographics, video gaming behaviors, mental health status, self-reported aggressive behaviors, and perceived side effects of video game playing. Overall, participants spent an average of 6.3 hours per week playing video games. Moreover, 47% of participants reported that they had played one or more intensely violent games. Non-gamers reported suffering poorer mental health compared to excessive gamers. Both non-gamers and excessive gamers overall reported suffering poorer mental health compared to low or moderate players. Participants who initiated gaming at younger ages were more likely to score poorer in mental health measures. Participants' self-reported aggressive behaviors were associated with length of gaming. Boys, but not girls, who reported playing video games excessively showed more aggressive behaviors. A multiple binary logistic regression shows that when controlling for other variables, older students, those who perceived less serious side effects of video gaming, and those who have personal computers, were more likely to report that they had played video games excessively. Our data show a curvilinear relationship between video game playing and mental health outcomes, with "moderate" gamers faring best and "excessive" gamers showing mild increases in problematic behaviors. Interestingly, "non-gamers" clearly show the worst outcomes. Therefore

  4. NASA Research Being Shared Through Live, Interactive Video Tours

    NASA Technical Reports Server (NTRS)

    Petersen, Ruth A.; Zona, Kathleen A.

    2001-01-01

    On June 2, 2000, the NASA Glenn Research Center Learning Technologies Project (LTP) coordinated the first live remote videoconferencing broadcast from a Glenn facility. The historic event from Glenn's Icing Research Tunnel featured wind tunnel technicians and researchers performing an icing experiment, obtaining results, and discussing the relevance to everyday flight operations and safety. After a brief overview of its history, students were able to "walk through" the tunnel, stand in the control room, and observe a live icing experiment that demonstrated how ice would grow on an airplane wing in flight through an icing cloud. The tour was interactive, with a spirited exchange of questions and explanations between the students and presenters. The virtual tour of the oldest and largest refrigerated icing research tunnel in the world was the second of a series of videoconferencing connections with the AP Physics students at Bay Village High School, Bay Village, Ohio. The first connection, called Aircraft Safety and Icing Research, introduced the Tailplane Icing Program. In an effort to improve aircraft safety by reducing the number of in-flight icing events, Glenn's Icing Branch uses its icing research aircraft to conduct flight tests. The presenter engaged the students in discussions of basic aircraft flight mechanics and the function of the horizontal tailplane, as well as the effect of ice on airfoil (wing or tail) surfaces. A brief video of actual flight footage provided a view of the pilot's actions and reactions and of the horizon during tailplane icing conditions.

  5. A spatiotemporal decomposition strategy for personal home video management

    NASA Astrophysics Data System (ADS)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low cost and high performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by the consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we have developed a content-based image retrieval system and the benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us better representation for video content at semantic object and concept levels than image-only based representation. In this paper we propose a bottom-up framework to combine interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatio-temporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.
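
    Because a careful selection of key frames may improve retrieval accuracy, a simple greedy selection strategy is sketched below: a frame becomes a new key frame whenever it differs sufficiently from the last selected one. This is a generic illustration with assumed thresholds, not the decomposition framework proposed in the paper.

        import numpy as np

        def select_key_frames(frames, bins=32, min_distance=0.5):
            """Greedy key-frame selection by intensity-histogram distance."""
            keys = [0]
            last_hist = None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
                hist = hist / hist.sum()
                if last_hist is None or np.abs(hist - last_hist).sum() > min_distance:
                    if i != 0:
                        keys.append(i)
                    last_hist = hist
            return keys

        clip = [np.full((32, 32), v, np.uint8) for v in (10, 12, 11, 200, 205, 90)]
        print(select_key_frames(clip))   # [0, 3, 5]: one key frame per distinct "scene"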

  6. Students’ Perception on Teaching Practicum Evaluation using Video Technology

    NASA Astrophysics Data System (ADS)

    Chee Sern, Lai; ‘Ain Helan Nor, Nurul; Foong, Lee Ming; Hassan, Razali

    2017-08-01

    Video technology has been widely used in education, especially in teaching and learning. However, the use of video technology for evaluation purposes, especially in teaching practicum, is extremely scarce, and the benefits of video technology in teaching practicum evaluation have not yet been fully discovered. For that reason, this quantitative research aimed at identifying the perceptions of trainee teachers towards teaching practicum evaluation via video technology. A total of 260 students of the Teacher Certification Programme (Program Pensiswazahan Guru - PPG) from the Faculty of Technical and Vocational Education (FPTV) of Universiti Tun Hussein Onn Malaysia (UTHM) were randomly selected as respondents. A questionnaire was developed to assess the suitability, effectiveness and satisfaction of using video technology for teaching practicum. Conclusively, this research showed that the trainee teachers have positive perceptions in all three aspects related to teaching practicum evaluation using video technology. Apart from that, no significant racial difference was found in the measured aspects. In addition, the trainee teachers also showed an understanding of the vast importance of teaching practicum evaluation via video. These research findings suggest that video technology can be a feasible and practical means of teaching practicum evaluation, especially for distance learning programs.

  7. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  8. An appraisal of the current and potential value of web 2.0 contributions to continuing education in oral implantology.

    PubMed

    Knösel, M; Engelke, W; Helms, H-J; Bleckmann, A

    2012-08-01

    To systematically assess the informational value, quality, intention, source and bias of web 2.0 footage whose aim is peer-to-peer education about oral implantology. YouTube (http://www.youtube.com) was scanned on 15 October 2010 for oral implantology-related videos using an adequately pre-defined search query. Search results were filtered with the system-generated category 'education' and the additional criterion 'most viewed'. Only those videos with at least 1000 views were included (total 124, of which 27 were excluded because they were not related to implantology). Filtered videos were discussed and rated with particular regard to the educational needs of potential groups of addressees [(i) undergraduates and laymen, (ii) dentists without or currently undergoing a specialisation in oral implantology and (iii) dentists who have completed a specialisation in the field of oral implantology] by a jury consisting of (i) an accredited post-graduate university instructor with 22 years of professional teaching experience in the field of implantology, (ii) a university lecturer in dentistry/orthodontics with 10 years of teaching experience and (iii) a university haematologist/oncologist. They were required to fill out a questionnaire for each video. The data were statistically analysed using non-parametric ANOVA (α = 5%) and a sign test (α = 0.05/3 = 0.017). The YouTube scan produced 1710 results in the category 'EDU'. The analysis revealed that there is a wide range of instructional footage on this topic, but of highly variable quality and informational value. In a large proportion of cases (47.4%), the intention of the footage was a mixture of education and advertisement. Its usefulness differed significantly for the three groups of addressees, offering greater novelty to undergraduates and post-graduates. YouTube and similar social media websites may have a potential capacity and value in complementing continuing education in the technique of oral implantology. As a means of

  9. STS-107 Flight Day 8 Highlights

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video shows the activities of the STS-107 crew (Rick Husband, Commander; William McCool, Pilot; Kalpana Chawla, David Brown, Michael Anderson, Laurel Clark, Mission Specialists; Ilan Ramon, Payload Specialist) during flight day 8 of the Columbia orbiter's final flight. The primary activities of flight day 8 are spaceborne experiments. Some background information is given on the SOFBALL (Structure of Flame Balls at Low Lewis-Number) microgravity experiment as footage of the flame balls is shown. The video also shows the MEIDEX (Mediterranean Israeli Dust Experiment) being calibrated using the Moon. The six STARS (Space Technology and Research Students) international student experiments are profiled, including experiments on carpenter bees (Liechtenstein), spiders (Australia), silkworms (China), ants (United States), crystal growth (Israel), and fish embryos (Japan). A commercial experiment on roses is also profiled. Astronaut Clark gives a tour of the SpaceHab RDM (Research Double Module) in the space shuttle's payload bay. Astronauts McCool and Ramon take turns on an exercise machine. The video includes a partly cloudy view of the Pacific Ocean.

  10. Geotail Video News Release

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video uses animation to show the solar wind and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese Space Agency. The mission objectives are reviewed by one of the scientists in a live view. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.

  11. Performance evaluation of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiajia; Wang, Ronggang; Fan, Kui; Wang, Zhenyu; Li, Ge; Wang, Wenmin

    2016-09-01

    Internet Video Coding (IVC) has been developed in MPEG by combining well-known existing technology elements and new coding tools with royalty-free declarations. In June 2015, the IVC project was approved as ISO/IEC 14496-33 (MPEG-4 Internet Video Coding). It is believed that this standard can be highly beneficial for video services in the Internet domain. This paper evaluates the objective and subjective performance of IVC by comparing it against Web Video Coding (WVC), Video Coding for Browsers (VCB) and AVC High Profile. Experimental results show that IVC's compression performance is approximately equal to that of the AVC High Profile for typical operational settings, both for streaming and low-delay applications, and is better than WVC and VCB.
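
    Codec comparisons of the kind reported in this abstract are commonly summarized with the Bjontegaard delta rate (BD-rate), the average bitrate difference at equal quality. The sketch below is a minimal, generic BD-rate calculation in Python; it is not taken from the paper, and the rate/PSNR points in the example are hypothetical.

      import numpy as np

      def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
          # Average bitrate difference (%) of the test codec versus the anchor
          # at equal PSNR (Bjontegaard delta rate). Negative means bit savings.
          lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
          # Fit cubic polynomials of log-rate as a function of PSNR.
          p_a = np.polyfit(psnr_anchor, lr_a, 3)
          p_t = np.polyfit(psnr_test, lr_t, 3)
          lo = max(min(psnr_anchor), min(psnr_test))
          hi = min(max(psnr_anchor), max(psnr_test))
          # Integrate both fits over the overlapping quality interval.
          int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
          int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
          avg_log_diff = (int_t - int_a) / (hi - lo)
          return (np.exp(avg_log_diff) - 1.0) * 100.0

      # Hypothetical rate (kbps) and PSNR (dB) points for two codecs.
      print(bd_rate([400, 800, 1600, 3200], [33.1, 35.8, 38.2, 40.5],
                    [380, 760, 1500, 3000], [33.4, 36.0, 38.5, 40.8]))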

  12. Video Tutorial of Continental Food

    NASA Astrophysics Data System (ADS)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the belief in the importance of media in the learning process. Media, as an intermediary, serve to focus the attention of learners. The selection of appropriate learning media strongly influences the success of delivering information in cognitive, affective and skill terms. Continental food is a course that studies food originating from Europe and is very complex. To reduce verbalism and provide more concrete learning, tutorial media are needed. Audio-visual tutorial media can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is a development method with the stages of analysing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard and producing the video tutorial. The results show that storyboard creation should be thorough and detailed, in accordance with the learning objectives, to reduce errors in video capture and so save time, cost and effort. In video capture, lighting, shooting angles and sound insulation contribute greatly to the quality of the tutorial video produced. Shooting should focus on tools, materials, and processing. Video tutorials should be interactive and two-way.

  13. WCE video segmentation using textons

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2010-03-01

    Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in a WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour. This limits the usage of WCE. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine and colon would therefore be of great advantage. In this paper we propose to use textons for the automatic detection of abrupt changes within a video. In particular, we consider, as features for each frame, hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments have been conducted on ten video segments extracted from WCE videos, in which the significant events had previously been labelled by experts. Results have shown that the proposed method may eliminate up to 70% of the frames from further investigation. The direct analysis of the doctors may hence be concentrated only on eventful frames. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
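
    The feature vector described above (hue, saturation, value, high-frequency energy and Gabor-bank responses) can be extracted per frame along the lines of the following sketch. It is only an illustration under assumed filter parameters (kernel size, wavelength, orientations); it does not reproduce the authors' settings or their texton clustering step.

      import cv2
      import numpy as np

      def frame_features(frame_bgr, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          # Per-frame descriptor: mean H, S, V, a high-frequency energy term,
          # and the mean absolute response of a small Gabor filter bank.
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          h, s, v = (hsv[:, :, i].mean() for i in range(3))
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
          hf_energy = cv2.Laplacian(gray, cv2.CV_32F).var()  # crude high-frequency proxy
          gabor = []
          for theta in thetas:
              # Arguments: ksize, sigma, theta, lambda (wavelength), gamma (aspect ratio)
              kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
              gabor.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kernel)).mean())
          return np.array([h, s, v, hf_energy, *gabor])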

  14. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
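
    The bilinear interpolation mentioned above amounts to fitting a four-term mapping between measured and reference target coordinates. As a minimal sketch (assuming arrays of measured and known true grid-target positions are available; this is an illustration, not the authors' actual procedure), the coefficients can be estimated by least squares:

      import numpy as np

      def fit_bilinear(measured, true):
          # Fit x' = a0 + a1*x + a2*y + a3*x*y (and likewise for y') mapping
          # measured target coordinates onto their known reference positions.
          x, y = measured[:, 0], measured[:, 1]
          A = np.column_stack([np.ones_like(x), x, y, x * y])
          coeff_x, *_ = np.linalg.lstsq(A, true[:, 0], rcond=None)
          coeff_y, *_ = np.linalg.lstsq(A, true[:, 1], rcond=None)
          return coeff_x, coeff_y

      def apply_bilinear(points, coeff_x, coeff_y):
          # Apply the fitted correction to arbitrary image measurements.
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([np.ones_like(x), x, y, x * y])
          return np.column_stack([A @ coeff_x, A @ coeff_y])

    A polynomial correction of higher order follows the same pattern, with extra columns (x**2, y**2, x**2*y, and so on) added to the design matrix.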

  15. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  16. Chelyabinsk meteoroid entry and airburst damage

    NASA Astrophysics Data System (ADS)

    Popova, O.; Jenniskens, P.; Shuvalov, V.; Emel'yanenko, V.; Rybnov, Y.; Kharlamov, V.; Kartashova, A.; Biryukov, E.; Khaibrakhmanov, S.; Glazachev, D.; Trubetskaya, I.

    2014-07-01

    A field study of the Chelyabinsk Airburst was conducted in the weeks following the event on February 15, 2013. To measure the impact energy, the extent of the glass damage was mapped by visiting over 50 villages in the area. To determine how that energy was deposited in the atmosphere, the most suitable dash-cam and video security camera footage was calibrated by taking star background images at the sites where video was taken. Shadow obstacles in videos taken at Chelyabinsk and Chebarkul were calibrated. To measure the nature of the damaging shockwave, arrival times were measured from the footage of 34 traffic cameras, data saved on a single timed server. To measure the impact of the shockwave, some 150 eyewitnesses were interviewed to ask about their personal experiences, smells, sense of heat, sunburn, etc. Meteorite find locations, shape, and size were documented by interviewing the finders and photographing the collections. Some of these meteorites were analyzed in a consortium study to determine what material properties contributed to the manner in which the meteoroid broke in the atmosphere. The results paint the first detailed picture of an asteroid impact airburst over a populated area. This information may help better prepare for future impact hazard mitigation scenarios.

  17. The Important Elements of a Science Video

    NASA Astrophysics Data System (ADS)

    Harned, D. A.; Moorman, M.; McMahon, G.

    2012-12-01

    New technologies have revolutionized the use of video as a means of communication. Films have become easier to create and to distribute. Video is omnipresent in our culture and supplements or even replaces writing in many applications. How can scientists and educators best use video to communicate scientific results? Video podcasts are being used in addition to journal, print, and online publications to communicate the relevance of scientific findings of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) program to general audiences such as resource managers, educational groups, public officials, and the general public. In an effort to improve the production of science videos, a survey was developed to provide insight into effective science communication with video. Viewers of USGS podcast videos were surveyed using Likert response scaling to identify the important elements of science videos. The survey respondents were 120 scientists and educators attending the 2010 and 2011 Fall Meetings of the American Geophysical Union and the 2012 meeting of the National Monitoring Council. The median age of the respondents was 44 years, with an education level of a Bachelor's degree or higher. Respondents reported that their primary sources for watching science videos were YouTube and science websites. Video length was the single most important element associated with reaching the greatest number of viewers. The surveys indicated a median length of 5 minutes as appropriate for a web video, with 5-7 minutes spanning the 25th-75th percentiles. As an illustration of the effect of length, a 5-minute and a 20-minute version of a USGS film on the effect of urbanization on water quality were made available on the same website. The short film has been downloaded three times more frequently than the longer version. The survey showed that the most important elements to include in a science film are style elements including strong visuals, an engaging story, and a simple message, and

  18. Dissection Videos Do Not Improve Anatomy Examination Scores

    ERIC Educational Resources Information Center

    Mahmud, Waqas; Hyder, Omar; Butt, Jamaal; Aftab, Arsalan

    2011-01-01

    In this quasi-experimental study, we describe the effect of showing dissection videos on first-year medical students' performance in terms of test scores during a gross anatomy course. We also surveyed students' perception regarding the showing of dissection videos. Two hundred eighty-seven first-year medical students at Rawalpindi Medical College…

  19. Tackling action-based video abstraction of animated movies for video browsing

    NASA Astrophysics Data System (ADS)

    Ionescu, Bogdan; Ott, Laurent; Lambert, Patrick; Coquin, Didier; Pacureanu, Alexandra; Buzuloiu, Vasile

    2010-07-01

    We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.
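
    The key-frame selection described above can be approximated by the following sketch: compute interframe distances for a shot, build a histogram of their cumulative sum, and keep the frame nearest the dominant mode. It is a simplified stand-in (one key frame per shot, fixed bin count), not the authors' full mode-distribution analysis.

      import numpy as np

      def key_frame_index(frames, bins=16):
          # `frames` is an array shaped (N, ...) holding the decoded frames of one shot.
          frames = np.asarray(frames, dtype=np.float32)
          d = np.abs(np.diff(frames, axis=0))
          diffs = d.reshape(d.shape[0], -1).mean(axis=1)    # per-transition distance
          cumulative = np.cumsum(diffs)
          hist, edges = np.histogram(cumulative, bins=bins)
          mode = np.argmax(hist)
          mode_center = 0.5 * (edges[mode] + edges[mode + 1])
          # +1 because the cumulative series starts at the second frame.
          return int(np.argmin(np.abs(cumulative - mode_center))) + 1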

  20. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
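
    As an illustration of hiding audio bits in a video frame's wavelet coefficients, the toy sketch below uses a one-level 2-D Haar transform and a simple parity (quantization-index) rule. It is not the paper's embedded bit-plane/index/Huffman pipeline; the quantization step size is an arbitrary assumption, and even frame dimensions are assumed.

      import numpy as np
      import pywt

      def embed_bits(frame_gray, bits, step=8.0):
          # Hide a 0/1 bit stream in the horizontal-detail subband of a
          # one-level 2-D Haar DWT by forcing coefficient parity (toy example).
          ll, (lh, hl, hh) = pywt.dwt2(frame_gray.astype(np.float32), 'haar',
                                       mode='periodization')
          flat = lh.ravel()
          for i, bit in enumerate(bits[:flat.size]):
              q = int(np.round(flat[i] / step))
              if q % 2 != bit:
                  q += 1                      # adjust parity to encode the bit
              flat[i] = q * step
          coeffs = (ll, (flat.reshape(lh.shape), hl, hh))
          return pywt.idwt2(coeffs, 'haar', mode='periodization')

      def extract_bits(stego_gray, n_bits, step=8.0):
          # Recover the bits by reading back the coefficient parity.
          _, (lh, _, _) = pywt.dwt2(stego_gray.astype(np.float32), 'haar',
                                    mode='periodization')
          return [int(np.round(c / step)) % 2 for c in lh.ravel()[:n_bits]]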

  1. Depicting surgical anatomy of the porta hepatis in living donor liver transplantation.

    PubMed

    Kelly, Paul; Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne

    2017-01-01

    Visualizing the complex anatomy of vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method to visualize anatomy with the purpose of refining preoperative planning and teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages including the integration of pre-surgical planning; case-specific footage and 3D models of the liver and associated vasculature, reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Use of visualization techniques such as transparency and overlays allows viewers not only to see the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables integrating case-based operative footage with advanced editing techniques for visualizing not only the surgical procedure, but also complex anatomy such as vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The novel educational system described in this paper enables integrating operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome.

  2. Depicting surgical anatomy of the porta hepatis in living donor liver transplantation

    PubMed Central

    Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne

    2017-01-01

    Visualizing the complex anatomy of vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method to visualize anatomy with the purpose of refining preoperative planning and teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages including the integration of pre-surgical planning; case-specific footage and 3D models of the liver and associated vasculature, reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Use of visualization techniques such as transparency and overlays allows viewers not only to see the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables integrating case-based operative footage with advanced editing techniques for visualizing not only the surgical procedure, but also complex anatomy such as vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The novel educational system described in this paper enables integrating operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome. PMID:29078606

  3. Play Therapy for Severe Psychological Trauma. [Videotape

    ERIC Educational Resources Information Center

    Gil, Eliana

    In this 36-minute educational video, a play and family therapist elucidates the nature of trauma, how to recognize it clinically, and how to manage its powerful effects upon children's development with the use of specific play materials and techniques. With a reenacted clinical interview, footage from an actual play therapy session, and a detailed…

  4. Begin with Love[R]. The First Three Months: Connecting with Your Child. [Videotape].

    ERIC Educational Resources Information Center

    CIVITAS Initiative, Chicago, IL.

    Hosted by Oprah Winfrey and featuring Dr. Kyle Pruett, this videotape focuses on new parents' relationship with their infant in the first 3 months of life. The 30-minute videotape begins with footage of infants during the newborn period and depicts parents talking about their emotional response to their infant's birth. The video focuses on…

  5. Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries, educational videos, etc. As compared to relatively structured format videos like CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.

  6. Sines and Cosines. Part 2 of 3

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1993-01-01

    The Law of Sines and the Law of Cosines are introduced and demonstrated in this 'Project Mathematics' series video using both film footage and computer animation. This video deals primarily with the mathematical field of Trigonometry and explains how these laws were developed and their applications. One significant use is geographical and geological surveying. This includes both the triangulation method and the spirit leveling method. With these methods, it is shown how the height of the tallest mountain in the world, Mt. Everest, was determined.
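
    For reference, the two laws demonstrated in the video can be written out, together with a standard two-station height formula of the kind underlying the surveying example (a generic textbook formulation, not quoted from the video; d is the horizontal baseline between the stations, and alpha and beta are the elevation angles at the nearer and farther station, with alpha > beta):

      \frac{a}{\sin A} \;=\; \frac{b}{\sin B} \;=\; \frac{c}{\sin C}
      \qquad \text{(Law of Sines)}

      c^{2} \;=\; a^{2} + b^{2} - 2ab\cos C
      \qquad \text{(Law of Cosines)}

      h \;=\; \frac{d\,\tan\alpha\,\tan\beta}{\tan\alpha - \tan\beta}
      \qquad \text{(height from two stations in line with the peak)}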

  7. The United States Air Force and Profession: Why Sixty Percent of Air Force General Officers are Still Pilots When Pilots Comprise Just Twenty Percent of the Officer Corps

    DTIC Science & Technology

    2006-08-25

    ...a capability, at that time at least, to transmit video footage of the aircraft, then loiter while waiting for a decision to shoot down the aircraft or... emerges. In general, however, modern Air Force warfare takes on the aura of a video game. Furthermore, the air and missile crews do not generally see

  8. Video game addiction, ADHD symptomatology, and video game reinforcement.

    PubMed

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < 0.001). Type of game played or preferred the most was not associated with addiction severity, ps > .05. The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.
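
    The hierarchical regression with an interaction term described above could be set up, in outline, as in the sketch below. The variable and file names are hypothetical placeholders; this is not the study's code or data.

      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("gamers.csv")   # hypothetical per-participant data

      # Step 1: covariates only (age, gender, weekly play time).
      step1 = smf.ols("addiction ~ age + C(gender) + weekly_hours", data=df).fit()

      # Step 2: add ADHD symptom severity, game type, and their interaction.
      step2 = smf.ols("addiction ~ age + C(gender) + weekly_hours"
                      " + adhd * C(game_type)", data=df).fit()

      # The change in R-squared across steps and the adhd:game_type coefficients
      # correspond to the incremental and interaction tests reported in the abstract.
      print(step1.rsquared, step2.rsquared)
      print(step2.summary())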

  9. An unsupervised method for summarizing egocentric sport videos

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are becoming more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.
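
    One simple way to realize the kind of unsupervised key-frame selection sketched in this abstract is shown below: a frame becomes a key frame whenever it differs sufficiently from every key frame chosen so far, so the number of key frames emerges from the data. This is a generic stand-in, not the authors' algorithm; the feature vectors and threshold are assumptions.

      import numpy as np

      def summarize(features, threshold=0.25):
          # `features` is an (N, D) array of per-frame descriptors, e.g. a
          # concatenation of appearance and motion features, ideally normalized.
          features = np.asarray(features, dtype=np.float32)
          keys = [0]
          for i in range(1, len(features)):
              dists = np.linalg.norm(features[keys] - features[i], axis=1)
              if dists.min() > threshold:    # far from every existing key frame
                  keys.append(i)
          return keys                        # indices of the selected key frames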

  10. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of

  11. 78 FR 41084 - Solicitation for a Cooperative Agreement-Video Production: Direct Supervision Jails

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... narration, interviews, graphics, and footage shot in jails. This project will be a collaborative venture... solicitation, one award will be made. Funds awarded under this solicitation may only be used for activities... the complete production. Project Director The production company will assign one staff to oversee the...

  12. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras.

    PubMed

    Bombara, Courtenay B; Dürr, Salome; Machovsky-Capuska, Gabriel E; Jones, Peter W; Ward, Michael P

    2017-01-01

    Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) which could be identified (based on colour patterns and markings). Most dog activity was observed in urban (houses and roads) environments, but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified food consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in this study

  13. STS-46 post flight press conference

    NASA Astrophysics Data System (ADS)

    1992-08-01

    At a post flight press conference, the flight crew of the STS-46 mission (Cmdr. Loren Shriver, Pilot Andrew Allen, Mission Specialists Claude Nicollier (European Space Agency (ESA)), Marsha Ivins (Flight Engineer), Jeff Hoffman (Payload Commander), Franklin Chang-Diaz, and Payload Specialist Franco Malerba (Italian Space Agency (ISA))) discussed their roles in and presented video footage, slides and still photographs of the different aspects of their mission. The primary objectives of the mission were the deployment of ESA's European Retrievable Carrier (EURECA) satellite and the joint NASA/ISA deployment and testing of the Tethered Satellite System (TSS). Secondary objectives included the IMAX Camera, the Limited Duration Space Environment Candidate Materials Exposure (LDVE), and the Pituitary Growth Hormone Cell Function (PHCF) experiments. Video footage of the EURECA and TSS deployment procedures is shown. Earth views were extensive and included Javanese volcanoes, Amazon basin forest ground fires, southern Mexico, southern Bolivian volcanoes, south-west Sudan and the Sahara Desert, and Melville Island, Australia. Questions from reporters and journalists at Johnson Space Center and Kennedy Space Center were discussed.

  14. STS-46 Post Flight Press Conference

    NASA Technical Reports Server (NTRS)

    1992-01-01

    At a post flight press conference, the flight crew of the STS-46 mission (Cmdr. Loren Shriver, Pilot Andrew Allen, Mission Specialists Claude Nicollier (European Space Agency (ESA)), Marsha Ivins (Flight Engineer), Jeff Hoffman (Payload Commander), Franklin Chang-Diaz, and Payload Specialist Franco Malerba (Italian Space Agency (ISA))) discussed their roles in and presented video footage, slides and still photographs of the different aspects of their mission. The primary objectives of the mission were the deployment of ESA's European Retrievable Carrier (EURECA) satellite and the joint NASA/ISA deployment and testing of the Tethered Satellite System (TSS). Secondary objectives included the IMAX Camera, the Limited Duration Space Environment Candidate Materials Exposure (LDVE), and the Pituitary Growth Hormone Cell Function (PHCF) experiments. Video footage of the EURECA and TSS deployment procedures is shown. Earth views were extensive and included Javanese volcanoes, Amazon basin forest ground fires, southern Mexico, southern Bolivian volcanoes, south-west Sudan and the Sahara Desert, and Melville Island, Australia. Questions from reporters and journalists at Johnson Space Center and Kennedy Space Center were discussed.

  15. Winter Video Series Coming in January | Poster

    Cancer.gov

    The Scientific Library’s annual Summer Video Series was so successful that it will be offering a new Winter Video Series beginning in January. For this inaugural event, the staff is showing the eight-part series from National Geographic titled “American Genius.” 

  16. STS-113 Mission Highlights Resource Tape Flight Days 7-11. Tape: 3 of 4

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video, part 3 of 4, shows the activities of the crew of Space Shuttle Endeavour and the Expedition 5 and 6 crews of the International Space Station (ISS) during flight days 7 through 11 of STS-113. Endeavour's crew consists of Commander Jim Wetherbee, Pilot Paul Lockhart, and Mission Specialists Michael Lopez-Alegria and John Herrington. Footage of flight day 7 includes a change of command ceremony on board the ISS, and Endeavour dumping supply water through a nozzle. On flight day 8 the Space Station Mobile Transporter jams while traveling on the P1 truss of the ISS, and Herrington attempts to free it as part of a lengthy extravehicular activity (EVA) with Lopez-Alegria. Flight day 9 is the last full day the three crews spend together. Expedition 5 NASA ISS Science Officer Peggy Whitson troubleshoots the Microgravity Glovebox on board the ISS with her successor Don Pettit. The undocking of Endeavour and the ISS is the main activity of flight day 10. Endeavour also deploys a pair of experimental tethered microsatellites for the Department of Defense. The footage from flight day 11 shows the Expedition 5 crew exercising, lying in recumbent seats to help them adjust to the gravity on Earth, and sleeping. The video includes numerous views of the earth, some with the ISS and Endeavour in the foreground. There are close-ups of Italy, Spain and Portugal, Tierra del Fuego, and Baja California, and a night view of Chicago and the Great Lakes.

  17. Multicore-based 3D-DWT video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc., where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation, using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D-DWT, the proposed encoder is able to compress a full high-definition video sequence in real time.
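
    A multi-level 3-D DWT over a group of pictures can be computed directly with PyWavelets, as in the minimal sketch below. Only the transform stage is shown; the run-length coding engine and multicore scheduling of the 3D-GOP-RL encoder are not reproduced, and the GOP dimensions are illustrative.

      import numpy as np
      import pywt

      def gop_3d_dwt(gop, wavelet="haar", levels=2):
          # `gop` is a group of pictures shaped (frames, height, width).
          # Returns the multi-level n-dimensional wavelet coefficient structure.
          return pywt.wavedecn(np.asarray(gop, dtype=np.float32),
                               wavelet=wavelet, level=levels)

      gop = np.random.rand(16, 64, 64).astype(np.float32)    # toy 16-frame GOP
      coeffs = gop_3d_dwt(gop)
      recon = pywt.waverecn(coeffs, wavelet="haar")
      print(np.allclose(recon, gop, atol=1e-4))               # lossless round trip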

  18. Competitive action video game players display rightward error bias during on-line video game play.

    PubMed

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

    Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research in the future, however further study is required before one can determine whether these results are an artefact of the method applied, or representative of a genuine rightward bias.

  19. Use of “Entertainment” Chimpanzees in Commercials Distorts Public Perception Regarding Their Conservation Status

    PubMed Central

    Schroepfer, Kara K.; Rosati, Alexandra G.; Chartrand, Tanya; Hare, Brian

    2011-01-01

    Chimpanzees (Pan troglodytes) are often used in movies, commercials and print advertisements with the intention of eliciting a humorous response from audiences. The portrayal of chimpanzees in unnatural, human-like situations may have a negative effect on the public's understanding of their endangered status in the wild while making them appear as suitable pets. Alternatively, media content that elicits a positive emotional response toward chimpanzees may increase the public's commitment to chimpanzee conservation. To test these competing hypotheses, participants (n = 165) watched a series of commercials in an experiment framed as a marketing study. Imbedded within the same series of commercials was one of three chimpanzee videos. Participants either watched 1) a chimpanzee conservation commercial, 2) commercials containing “entertainment” chimpanzees or 3) control footage of the natural behavior of wild chimpanzees. Results from a post-viewing questionnaire reveal that participants who watched the conservation message understood that chimpanzees were endangered and unsuitable as pets at higher levels than those viewing the control footage. Meanwhile participants watching commercials with entertainment chimpanzees showed a decrease in understanding relative to those watching the control footage. In addition, when participants were given the opportunity to donate part of their earnings from the experiment to a conservation charity, donations were least frequent in the group watching commercials with entertainment chimpanzees. Control questions show that participants did not detect the purpose of the study. These results firmly support the hypothesis that use of entertainment chimpanzees in the popular media negatively distorts the public's perception and hinders chimpanzee conservation efforts. PMID:22022503

  20. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI cable wiring. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Streaming Video--The Wave of the Video Future!

    ERIC Educational Resources Information Center

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  2. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  3. Case Study Analyses of Play Behaviors of 12-Month-Old Infants Later Diagnosed with Autism

    ERIC Educational Resources Information Center

    Mulligan, Shelley

    2015-01-01

    Case study research methodology was used to describe the play behaviors of three infants at 12 months of age, who were later diagnosed with an autism spectrum disorder. Data included standardized test scores, and analyses of video footage of semi-structured play sessions from infants identified as high risk for autism, because of having a sibling…

  4. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  5. A scoping review of video gaming in rehabilitation.

    PubMed

    Ravenek, Kelly E; Wolfe, Dalton L; Hitzig, Sander L

    2016-08-01

    To examine the scope of the peer-reviewed literature on the use of commercially available video gaming in rehabilitation. Five databases (SCOPUS, Cochrane, PsycINFO, PubMed and CINAHL) were searched for articles published between January 1990 and January 2014. The reference lists of selected articles were also reviewed to identify other relevant studies. Thirty articles met the inclusion criteria. Commercially available video gaming in rehabilitation was most commonly recommended by physiotherapists (50% or 15/30 studies) for populations at risk for falls or with decreased balance (67% or 19/30 studies). The most commonly used target outcomes were those assessing balance and/or fall prevention, with the Berg Balance Scale being the most frequently used (53% or 16/30 studies) outcome measure. The Nintendo Wii was the most prevalent gaming system (90% or 27/30 studies) used in the identified studies. Video gaming in rehabilitation is widely used by clinicians. Preliminary findings show that video gaming technology can be applied across a wide variety of rehabilitation populations, with some evidence showing clinical gains in physical functioning (e.g. gait and balance). There is a need for more robust clinical trials evaluating the efficacy of using video game systems as an adjunct to conventional rehabilitation. Implications for Rehabilitation: Video gaming is a readily available technology that has been suggested as an enjoyable and motivating activity that engages patients in rehabilitation programming. Video gaming is becoming an increasingly popular adjunct to traditional therapy. Video gaming is most commonly used by physical therapists in a hospital setting for those with balance impairments. Video gaming has been shown to improve functional outcomes.

  6. Video consultation use by Australian general practitioners: video vignette study.

    PubMed

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  7. Trigger Videos on the Web: Impact of Audiovisual Design

    ERIC Educational Resources Information Center

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  8. A catalog of video records of the 2013 Chelyabinsk superbolide

    NASA Astrophysics Data System (ADS)

    Borovička, J.; Shrbený, L.; Kalenda, P.; Loskutov, N.; Brown, P.; Spurný, P.; Cooke, W.; Blaauw, R.; Moser, D. E.; Kingery, A.

    2016-01-01

    The Chelyabinsk superbolide of February 15, 2013, was caused by the atmospheric entry of a ~19 m asteroid with a kinetic energy of 500 kT TNT just south of the city of Chelyabinsk, Russia. It was a rare event; impacts of similar energy occur on the Earth only a few times per century. Impacts of this energy near such a large urban area are expected only a few times per 10 000 years. A number of video records obtained by casual eyewitnesses, dashboard cameras in cars, security, and traffic cameras were made publicly available by their authors on the Internet. These represent a rich repository for future scientific studies of this unique event. To aid researchers in the archival study of this airburst, we provide and document a catalog of 960 videos showing various aspects of the event. Among the video records are 400 distinct videos showing the bolide itself and 108 videos showing the illumination caused by the bolide. Other videos show the dust trail left in the atmosphere, the arrival of the blast wave on the ground, or the damage caused by the blast wave. As these video recordings have high scientific, historical, and archival value for future studies of this airburst, a systematic documentation and description of records is desirable. Many have already been used for scientific analyses. We give the exact locations where 715 videos were taken as well as details of the visible/audible phenomena in each video recording. An online version of the published catalog has been developed and will be regularly updated to provide a long-term database for investigators. An online version of the catalog is available at http://meteor.asu.cas.cz/Chelyabinsk/

  9. An efficient approach for video information retrieval

    NASA Astrophysics Data System (ADS)

    Dong, Daoguo; Xue, Xiangyang

    2005-01-01

    Today, more and more video information can be accessed through the Internet, satellite, etc. Retrieving specific video information from a large-scale video database has become an important and challenging research topic in the area of multimedia information retrieval. In this paper, we introduce a new and efficient index structure, OVA-File, which is a variant of the VA-File. In OVA-File, approximations that are close to each other in data space are stored in close positions within the approximation file. The benefit is that only the part of the approximations close to the query vector needs to be visited to obtain the query result. Both a shot query algorithm and a video clip query algorithm are proposed to support efficient video information retrieval. The experimental results showed that queries based on OVA-File were much faster than those based on VA-File, with a small loss of result quality.
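
    The filter-and-refine idea behind VA-File-style indexes can be sketched as follows: each vector is reduced to a coarse grid-cell approximation, candidates are ranked by distance to the query using only those approximations, and exact distances are computed for the surviving fraction. This is a simplification (a real VA-File uses per-cell lower and upper distance bounds, and OVA-File additionally reorders the approximation file); the parameters are illustrative.

      import numpy as np

      def build_approximations(data, bits=4):
          # Quantize each dimension onto a 2**bits grid (VA-File-style signatures).
          lo, hi = data.min(axis=0), data.max(axis=0)
          cells = (2 ** bits) - 1
          span = (hi - lo) + 1e-12
          approx = np.round((data - lo) / span * cells).astype(np.int32)
          return approx, lo, span / cells

      def knn_query(data, approx, lo, cell_size, q, k=5, keep=0.1):
          # Filter: rank by distance from the query to each vector's cell centre.
          centres = approx * cell_size + lo + cell_size / 2.0
          coarse = np.linalg.norm(centres - q, axis=1)
          n_candidates = max(k, int(keep * len(data)))
          candidates = np.argsort(coarse)[:n_candidates]
          # Refine: exact distances only for the surviving candidates.
          exact = np.linalg.norm(data[candidates] - q, axis=1)
          return candidates[np.argsort(exact)[:k]]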

  10. Playing violent video games increases intergroup bias.

    PubMed

    Greitemeyer, Tobias

    2014-01-01

    Previous research has shown how, why, and for whom violent video game play is related to aggression and aggression-related variables. In contrast, less is known about whether some individuals are more likely than others to be the target of increased aggression after violent video game play. The present research examined the idea that the effects of violent video game play are stronger when the target is a member of an outgroup rather than an ingroup. In fact, a correlational study revealed that violent video game exposure was positively related to ethnocentrism. This relation remained significant when controlling for trait aggression. Providing causal evidence, an experimental study showed that playing a violent video game increased aggressive behavior, and that this effect was more pronounced when the target was an outgroup rather than an ingroup member. Possible mediating mechanisms are discussed.

  11. Effects of prosocial video games on prosocial behavior.

    PubMed

    Greitemeyer, Tobias; Osswald, Silvia

    2010-02-01

    Previous research has documented that playing violent video games has various negative effects on social behavior in that it causes an increase in aggressive behavior and a decrease in prosocial behavior. In contrast, there has been much less evidence on the effects of prosocial video games. In the present research, 4 experiments examined the hypothesis that playing a prosocial (relative to a neutral) video game increases helping behavior. In fact, participants who had played a prosocial video game were more likely to help after a mishap, were more willing (and devoted more time) to assist in further experiments, and intervened more often in a harassment situation. Results further showed that exposure to prosocial video games activated the accessibility of prosocial thoughts, which in turn promoted prosocial behavior. Thus, depending on the content of the video game, playing video games not only has negative effects on social behavior but has positive effects as well. Copyright 2009 APA, all rights reserved

  12. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most of the current approaches for perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with its audio. The results reported are statistically significant with p=0.024.

  13. Objective video presentation QoE predictor for smart adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of video being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign the video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to a much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications, in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
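
    A QoE-driven adaptation rule of the kind argued for above can be sketched very simply: among the renditions whose bitrate fits the measured bandwidth, pick the one with the highest predicted per-device QoE score rather than simply the highest bitrate. The rendition ladder and scores below are hypothetical, and SSIMplus itself (a commercial metric) is not reimplemented here; any quality model producing per-rendition scores could fill that role.

      # Hypothetical rendition ladder: (bitrate_kbps, predicted QoE score 0-100
      # for the viewer's device), e.g. scores computed once at encode time.
      RENDITIONS = [(800, 62.0), (1800, 74.0), (3500, 83.0), (6000, 88.0)]

      def pick_rendition(available_kbps, renditions=RENDITIONS):
          # QoE-driven adaptation: choose the feasible rendition with the best
          # predicted quality, falling back to the lowest rate if none fits.
          feasible = [r for r in renditions if r[0] <= available_kbps]
          if not feasible:
              return min(renditions, key=lambda r: r[0])
          return max(feasible, key=lambda r: r[1])

      print(pick_rendition(2400))   # -> (1800, 74.0)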

  14. Script Design for Information Film and Video.

    ERIC Educational Resources Information Center

    Shelton, S. M. (Marty); And Others

    1993-01-01

    Shows how the empathy created in the audience by each of the five genres of film/video is a function of the five elements of film design: camera angle, close up, composition, continuity, and cutting. Discusses film/video script designing. Illustrates these concepts with a sample script and story board. (SR)

  15. Physics and Video Analysis

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  16. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents.

    PubMed

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J M; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. We assigned 270 gaming (i.e. ≥ 2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multilevel intention-to-treat regression analyses were conducted. The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp(β) = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥ 1 hour/week during the whole intervention period. The active video game intervention did not result in lower values on anthropometrics in a group of 'excessive' non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month period. Some effects were even in the unexpected direction, with the control group showing lower BMI-SDS and skinfolds than the intervention group.

  17. Replacing Non-Active Video Gaming by Active Video Gaming to Prevent Excessive Weight Gain in Adolescents

    PubMed Central

    Simons, Monique; Brug, Johannes; Chinapaw, Mai J. M.; de Boer, Michiel; Seidell, Jaap; de Vet, Emely

    2015-01-01

    Objective The aim of the current study was to evaluate the effects of and adherence to an active video game promotion intervention on anthropometrics, sedentary screen time and consumption of sugar-sweetened beverages and snacks among non-active video gaming adolescents who primarily were of healthy weight. Methods We assigned 270 gaming (i.e. ≥2 hours/week non-active video game time) adolescents randomly to an intervention group (n = 140) (receiving active video games and encouragement to play) or a waiting-list control group (n = 130). BMI-SDS (SDS = adjusted for mean standard deviation score), waist circumference-SDS, hip circumference and sum of skinfolds were measured at baseline, at four and ten months follow-up (primary outcomes). Sedentary screen time, physical activity, consumption of sugar-sweetened beverages and snacks, and process measures (not at baseline) were assessed with self-reports at baseline, one, four and ten months follow-up. Multilevel intention-to-treat regression analyses were conducted. Results The control group decreased significantly more than the intervention group on BMI-SDS (β = 0.074, 95%CI: 0.008;0.14), and sum of skinfolds (β = 3.22, 95%CI: 0.27;6.17) (overall effects). The intervention group had a significantly higher decrease in self-reported non-active video game time (β = -1.76, 95%CI: -3.20;-0.32) and total sedentary screen time (Exp(β) = 0.81, 95%CI: 0.74;0.88) than the control group (overall effects). The process evaluation showed that 14% of the adolescents played the Move video games every week ≥1 hour/week during the whole intervention period. Conclusions The active video game intervention did not result in lower values on anthropometrics in a group of ‘excessive’ non-active video gamers (mean ~ 14 hours/week) who primarily were of healthy weight compared to a control group throughout a ten-month period. Some effects were even in the unexpected direction, with the control group showing lower BMI-SDS and sum of skinfolds than the intervention group.

  18. Developing the Fourth Evaluation Dimension: A Protocol for Evaluation of Video From the Patient's Perspective During Major Incident Exercises.

    PubMed

    Haverkort, J J Mark; Leenen, Luke P H

    2017-10-01

    Presently used evaluation techniques rely on 3 traditional dimensions: reports from observers, registration system data, and observational cameras. Some of these techniques are observer-dependent and are not reproducible for a second review. This proof-of-concept study aimed to test the feasibility of extending evaluation to a fourth dimension, the patient's perspective. Footage was obtained during a large, full-scale hospital trauma drill. Two mock victims were equipped with point-of-view cameras filming from the patient's head. Based on the Major Incident Hospital's first experience during the drill, a protocol was developed for a prospective, standardized method to evaluate a hospital's major incident response from the patient's perspective. The protocol was then tested in a second drill for its feasibility. New insights were gained after review of the footage. The traditional observer missed some of the evaluation points, which were seen on the point-of-view cameras. The information gained from the patient's perspective proved to be implementable into the designed protocol. Use of point-of-view camera recordings from a mock patient's perspective is a valuable addition to traditional evaluation of trauma drills and trauma care. Protocols should be designed to optimize and objectify judgement of such footage. (Disaster Med Public Health Preparedness. 2017;11:594-599).

  19. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, producing features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted to handle scene parallax. Experimental results are provided to demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC2015 to showcase the processed videos.
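    The abstract above names two motion estimates, inter motions between cameras and intra motions within a camera, plus a smoothed virtual camera path. The sketch below is a minimal OpenCV stand-in for those ingredients, not the paper's grid-based tracker or mesh-based parallax model: it estimates a RANSAC homography between synchronized frames of two cameras and applies moving-average smoothing to a per-frame translation path. All names and parameters are illustrative.

    ```python
    # Hedged sketch of the two motion estimates named above: an inter homography
    # between synchronized frames of two cameras, and a smoothed intra path for
    # one camera, using plain ORB features and RANSAC.
    import cv2
    import numpy as np

    def inter_homography(frame_a, frame_b):
        """Homography mapping frame_a onto frame_b (both grayscale uint8 images)."""
        orb = cv2.ORB_create(1000)
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H

    def smooth_intra_path(translations, radius=15):
        """Moving-average smoothing of per-frame (dx, dy) intra motions, the
        simplest analogue of solving for a virtual 2D camera path."""
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        path = np.cumsum(np.asarray(translations, dtype=float), axis=0)
        smoothed = np.column_stack(
            [np.convolve(path[:, i], kernel, mode="same") for i in range(2)])
        return smoothed - path  # per-frame correction to apply
    ```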

  20. 77 FR 48102 - Closed Captioning and Video Description of Video Programming

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-13

    ... Captioning and Video Description of Video Programming AGENCY: Federal Communications Commission. [[Page 48103..., enacted by the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA), which...) establishing requirements for closed captioning on video programming to ensure access by persons with hearing...

  1. Using video to introduce clinical materials.

    PubMed

    Kommalage, Mahinda; Senadheera, Chandanie

    2012-08-01

    The early introduction of clinical material is a recognised strategy in medical education. The University of Ruhuna Medical School, where a traditional curriculum is followed, offers students pre-clinical subjects without clinical exposure during their first and second years. Clinical materials in the form of videos were introduced to first-year students. In the videos, patients and their relatives described the diseases and related problems. Students were instructed to identify the problems encountered by patients and relatives. Each video was followed by a discussion of the problems identified by the students. The medical, social and economic problems encountered by patients and relatives were emphasised during post-video discussions. A lecture was conducted linking the contents of the videos to subsequent lectures. The aim of this study is to investigate whether combining teaching preclinical material with a video presentation of relevant clinical cases facilitates the interest and understanding of students. Quantitative data were collected using a questionnaire, whereas qualitative data were collected using focus group discussions. Quantitative data showed that students appreciated the video, had 'better' knowledge acquisition and a 'better' understanding of problems encountered by patients. Qualitative analysis highlighted the following themes: increased interest; enhanced understanding; relevance of basic knowledge to clinical practice; orientation to profession; and personalising theories. The introduction of patients in the form of videos helped students to understand the relevance of subject material for clinical practice, increased their interest and facilitated a better understanding of the subject material. Therefore, it seems video is a feasible medium to introduce clinical materials to first-year students who follow a traditional curriculum in a resource-limited environment. © Blackwell Publishing Ltd 2012.

  2. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.

    PubMed

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.

  3. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder

    NASA Astrophysics Data System (ADS)

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
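    Both SSVH entries above describe an encoder-decoder that turns a frame sequence into binary codes which must reconstruct the video content. The sketch below is a heavily simplified, single-granularity toy of that idea in PyTorch, not the paper's hierarchical architecture or its neighborhood-structure loss; the feature dimension, code length, and names are assumptions.

    ```python
    # Hedged, heavily simplified sketch of the SSVH idea: encode a sequence of
    # frame features into a short (relaxed) binary code with an RNN autoencoder
    # and train it to reconstruct the features. Single-granularity toy only.
    import torch
    import torch.nn as nn

    class TinyVideoHasher(nn.Module):
        def __init__(self, feat_dim=512, code_bits=64):
            super().__init__()
            self.encoder = nn.GRU(feat_dim, code_bits, batch_first=True)
            self.decoder = nn.GRU(code_bits, feat_dim, batch_first=True)

        def forward(self, frames):                      # frames: (B, T, feat_dim)
            _, h = self.encoder(frames)                 # h: (1, B, code_bits)
            code = torch.tanh(h)                        # relaxed binary code
            seed = code.transpose(0, 1).repeat(1, frames.size(1), 1)
            recon, _ = self.decoder(seed)               # reconstruct frame features
            return recon, torch.sign(code).squeeze(0)   # hard codes at inference

    model = TinyVideoHasher()
    video = torch.randn(2, 30, 512)                     # 2 videos, 30 frames each
    recon, codes = model(video)
    loss = nn.functional.mse_loss(recon, video)         # self-supervised reconstruction loss
    ```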

  4. Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.

    PubMed

    Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina

    2011-10-01

    Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.

  5. Enhance Video Film using Retnix method

    NASA Astrophysics Data System (ADS)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. Statistics such as the mean and standard deviation are used as evaluation criteria in this paper and are computed for each video clip, which is divided into 80 images. The studied filming environments have different light intensities (315, 566, and 644 lux); this variation gives a realistic approximation of outdoor filming. The outputs of the suggested algorithm are compared with the results obtained before applying it. The method is applied in two ways: first, to the full video clip to obtain the enhanced film; second, to every individual image, after which the enhanced images are recombined into the enhanced film. This paper shows that, judged by the statistical criteria, the enhancement technique yields good-quality video, and its use is recommended in different applications.

  6. Alleviating travel anxiety through virtual reality and narrated video technology.

    PubMed

    Ahn, J C; Lee, O

    2013-01-01

    This study presents empirical evidence that narrated video clips embedded in hotels' virtual reality websites help relieve travel anxiety. Although virtual reality functions alone have been shown to provide some relief from travel anxiety, a stronger virtual reality website can be built by adding video clips with narration about important aspects of the hotel. We posit that these important aspects are (1) the escape route and (2) information about the surrounding neighborhood, both derived from existing research on anxiety disorders as well as travel anxiety. Thus we created one video clip that showed and narrated the escape route from the hotel room and another that showed and narrated the surrounding neighborhood. We then conducted experiments with this enhanced virtual reality website of a hotel by having human subjects use the website and fill out a questionnaire. The results confirm our hypothesis that there is a statistically significant relationship between the degree of travel anxiety and the psychological relief provided by embedded virtual reality functions with narrated video clips on a hotel website (Tab. 2, Fig. 3, Ref. 26).

  7. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  8. Playing Action Video Games Improves Visuomotor Control.

    PubMed

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  9. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step for the content-based video analysis and retrieval to provide efficient access to the important images and video segments from a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames, and segment the videos into shots based on the contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to the videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
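    The colonoscopy entry above does not spell out its blur criterion, so the sketch below uses a generic stand-in: the variance of the Laplacian, a common sharpness proxy, with an invented threshold. It only illustrates the blurry-frame screening step, not the shot segmentation.

    ```python
    # Hedged sketch of blurry-frame screening. The Laplacian-variance sharpness
    # proxy and the threshold are generic assumptions, not the paper's detector.
    import cv2

    def is_blurry(frame_bgr, threshold=60.0):
        """True when the frame's Laplacian variance falls below the threshold."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

    def keep_sharp_frames(video_path, threshold=60.0):
        """Yield only the non-blurry frames of a video file."""
        cap = cv2.VideoCapture(video_path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if not is_blurry(frame, threshold):
                yield frame
        cap.release()
    ```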

  10. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including the video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit through the Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.

  11. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  12. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  13. Evaluation of privacy in high dynamic range video sequences

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Yuan, Lin; Krasula, Lukáš; Korshunov, Pavel; Fliegel, Karel; Ebrahimi, Touradj

    2014-09-01

    The ability of high dynamic range (HDR) to capture details in environments with high contrast has a significant impact on privacy in video surveillance. However, the extent to which HDR imaging affects privacy, when compared to a typical low dynamic range (LDR) imaging, is neither well studied nor well understood. To achieve such an objective, a suitable dataset of images and video sequences is needed. Therefore, we have created a publicly available dataset of HDR video for privacy evaluation PEViD-HDR, which is an HDR extension of an existing Privacy Evaluation Video Dataset (PEViD). PEViD-HDR video dataset can help in the evaluations of privacy protection tools, as well as for showing the importance of HDR imaging in video surveillance applications and its influence on the privacy-intelligibility trade-off. We conducted a preliminary subjective experiment demonstrating the usability of the created dataset for evaluation of privacy issues in video. The results confirm that a tone-mapped HDR video contains more privacy sensitive information and details compared to a typical LDR video.

  14. A Completely Blind Video Integrity Oracle.

    PubMed

    Mittal, Anish; Saad, Michele A; Bovik, Alan C

    2016-01-01

    Considerable progress has been made toward developing still picture perceptual quality analyzers that do not require any reference picture and that are not trained on human opinion scores of distorted images. However, there do not yet exist any such completely blind video quality assessment (VQA) models. Here, we attempt to bridge this gap by developing a new VQA model called the video intrinsic integrity and distortion evaluation oracle (VIIDEO). The new model does not require the use of any additional information other than the video being quality evaluated. VIIDEO embodies models of intrinsic statistical regularities that are observed in natural videos, which are used to quantify disturbances introduced due to distortions. An algorithm derived from the VIIDEO model is thereby able to predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments of video quality. Even with such a paucity of information, we are able to show that the VIIDEO algorithm performs much better than the legacy full reference quality measure MSE on the LIVE VQA database and delivers performance comparable with a leading human judgment trained blind VQA model. We believe that the VIIDEO algorithm is a significant step toward making real-time monitoring of completely blind video quality possible.

  15. Community Access Video.

    ERIC Educational Resources Information Center

    Frederiksen, H. Allan

    In the belief that "the spread of technological development and the attendant rapidly changing environment creates the necessity for multi-source feedback systems to maximize the alternatives available in dealing with global problems," the author shows how to participate in the process of alternate video. He offers detailed information…

  16. Detection of inter-frame forgeries in digital videos.

    PubMed

    K, Sitara; Mehtre, B M

    2018-05-26

    Videos are acceptable as evidence in a court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alteration of visual content by perpetrators, locally or remotely. Such malicious alterations of video content (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered, leading to an increase in false positives. To address this issue, we propose a zooming detection method and incorporate it into video tampering detection. Frame shuffling detection, which had not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2346 are pristine and the rest are candidate inter-frame forged videos. Experimental results show that we successfully detected frame shuffling with encouraging accuracy rates, and we achieved improved accuracy for forgery detection in frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.

  17. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality metric is used to assess video frame quality: each video frame is split into a filtered image and a noise image by a bilateral filter, which resembles the low-pass filtering behavior of the human visual system. The audio frames are evaluated by the PEAQ algorithm, and the two results are integrated to evaluate overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS scores, indicating that the proposed method is effective in assessing video conference quality.
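    The decomposition described above, splitting each frame into a filtered image and a noise image with a bilateral filter, can be sketched directly with OpenCV. The filter parameters and the simple noise-energy score below are illustrative assumptions; the PEAQ audio evaluation and the final fusion step are not reproduced.

    ```python
    # Hedged sketch of the frame decomposition described above: a bilateral
    # filter yields the "filtered image" and the residual is the "noise image".
    import cv2
    import numpy as np

    def split_frame(frame_bgr, d=9, sigma_color=75, sigma_space=75):
        """Return (filtered, noise) components of a video frame."""
        filtered = cv2.bilateralFilter(frame_bgr, d, sigma_color, sigma_space)
        noise = frame_bgr.astype(np.int16) - filtered.astype(np.int16)
        return filtered, noise

    def frame_noise_energy(frame_bgr):
        """A simple per-frame score: mean squared energy of the noise component."""
        _, noise = split_frame(frame_bgr)
        return float(np.mean(noise.astype(np.float64) ** 2))
    ```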

  18. NASA Video Catalog. Supplement 15

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.

  19. NASA Video Catalog. Supplement 13

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.

  20. NASA Video Catalog. Supplement 14

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This issue of the NASA Video Catalog cites video productions listed in the NASA STI Database. The videos listed have been developed by the NASA centers, covering Shuttle mission press conferences; fly-bys of planets; aircraft design, testing and performance; environmental pollution; lunar and planetary exploration; and many other categories related to manned and unmanned space exploration. Each entry in the publication consists of a standard bibliographic citation accompanied by an abstract. The Table of Contents shows how the entries are arranged by divisions and categories according to the NASA Scope and Coverage Category Guide. For users with specific information, a Title Index is available. A Subject Term Index, based on the NASA Thesaurus, is also included. Guidelines for usage of NASA audio/visual material, ordering information, and order forms are also available.

  1. Blood Sampling in Newborns: A Systematic Review of YouTube Videos.

    PubMed

    Bueno, Mariana; Nishi, Érika Tihemi; Costa, Taine; Freire, Laís Machado; Harrison, Denise

    The objective of this study was to conduct a systematic review of YouTube videos showing neonatal blood sampling and to evaluate the pain management and comforting interventions used. Selected videos were consumer- or professional-produced videos showing human newborns undergoing heel lancing or venipuncture for blood sampling, showing the entire blood sampling procedure (from the first attempt or puncture to the application of a cotton ball or bandage), published before October 2014, with Portuguese titles and available audio. Search terms included "neonate," "newborn," "neonatal screening," and "blood collection." Two reviewers independently screened the videos and extracted the data. A total of 13,140 videos were retrieved, of which 1354 were further evaluated and 68 were included. Videos were mostly consumer produced (97%). Heel lancing was performed in 62 (91%). Forty-nine infants (72%) were held by an adult during the procedure. The median pain score immediately after puncture was 4 (interquartile range [IQR] = 0-5), and the median length of cry throughout the procedure was 61 seconds (IQR = 88). Breastfeeding (3%) and swaddling (1.5%) were rarely implemented. YouTube videos in Portuguese of newborns undergoing blood collection demonstrate minimal use of pain treatment and maximal distress during procedures. Knowledge translation strategies are needed to implement effective measures for neonatal pain relief and comfort.

  2. Identifying hidden voice and video streams

    NASA Astrophysics Data System (ADS)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, which detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
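    The VVS-I description above builds a flow fingerprint from power spectral density analysis of flow features. As a hedged approximation, the sketch below computes a Welch PSD fingerprint from a per-interval byte-count series and matches it against learned fingerprints by correlation; the sampling rate, feature choice, and threshold are assumptions, not the paper's actual design.

    ```python
    # Hedged sketch of a PSD-based flow fingerprint in the spirit of VVS-I:
    # turn a per-interval byte-count series into a Welch power spectrum and
    # compare unit-norm fingerprints by correlation.
    import numpy as np
    from scipy.signal import welch

    def flow_fingerprint(bytes_per_interval, fs=50.0):
        """PSD fingerprint of a flow sampled at fs intervals per second."""
        series = np.asarray(bytes_per_interval, dtype=float)
        _, psd = welch(series - series.mean(), fs=fs, nperseg=min(256, len(series)))
        return psd / (np.linalg.norm(psd) + 1e-12)

    def matches_known_stream(fingerprint, known_fingerprints, threshold=0.9):
        """True if the fingerprint correlates strongly with any learned one."""
        return any(float(np.dot(fingerprint, k)) >= threshold for k in known_fingerprints)
    ```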

  3. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
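    The entry above applies a blocking model from telephone systems to video stream admission. The classical Erlang B formula is the standard such model, so the sketch below uses it to contrast a monolithic server with a partitioned cluster carrying the same total load; the paper's exact model and the numbers used here are assumptions.

    ```python
    # Hedged sketch: classical Erlang B blocking probability, the standard
    # telephone-system model alluded to above (the paper's exact model may
    # differ). offered_load is in Erlangs; servers is the number of stream
    # slots a video server component can guarantee.
    def erlang_b(offered_load, servers):
        """Probability that a new stream request is blocked."""
        b = 1.0
        for k in range(1, servers + 1):          # numerically stable recursion
            b = offered_load * b / (k + offered_load * b)
        return b

    # Example: one monolithic server with 200 slots vs. two partitions of 100
    # slots, each carrying half of an offered load of 180 Erlangs.
    monolithic = erlang_b(180.0, 200)
    partitioned = erlang_b(90.0, 100)
    print(f"monolithic blocking ~ {monolithic:.4f}, partitioned ~ {partitioned:.4f}")
    ```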

  4. Video time encoding machines.

    PubMed

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
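    The architecture above encodes filtered video signals with integrate-and-fire neurons whose spike times carry the information. The sketch below is a minimal single-neuron integrate-and-fire time encoder for a one-dimensional, already-filtered signal; the bias, threshold, and test signal are illustrative, not the paper's parameters.

    ```python
    # Hedged sketch of a single integrate-and-fire time encoder: integrate the
    # (biased) signal and emit a spike time whenever the integral crosses a
    # threshold, then reset by subtraction.
    import numpy as np

    def integrate_and_fire_encode(signal, dt=1e-3, bias=1.0, threshold=0.05):
        """Return spike times (seconds) for a 1-D signal sampled every dt seconds."""
        integral, spikes = 0.0, []
        for n, x in enumerate(signal):
            integral += (x + bias) * dt          # bias keeps the integrand positive
            if integral >= threshold:
                spikes.append(n * dt)            # record the crossing time
                integral -= threshold            # reset with memory
        return np.asarray(spikes)

    # Example: encode a band-limited test signal.
    t = np.arange(0, 1.0, 1e-3)
    spike_times = integrate_and_fire_encode(0.4 * np.sin(2 * np.pi * 5 * t))
    ```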

  5. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  6. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: Single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: Turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): The rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these challenges.

  7. Content Based Lecture Video Retrieval Using Speech and Video Text Information

    ERIC Educational Resources Information Center

    Yang, Haojin; Meinel, Christoph

    2014-01-01

    In the last decade e-lecturing has become more and more popular. The amount of lecture video data on the "World Wide Web" (WWW) is growing rapidly. Therefore, a more efficient method for video retrieval in WWW or within large lecture video archives is urgently needed. This paper presents an approach for automated video indexing and video…

  8. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  9. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

    When video replaces film the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
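    Both entries above describe a pre-/post-trigger scheme that archives only the frames around detected activity. The sketch below is a plain-Python software stand-in for that hardware state machine: a ring buffer of recent frames plus a simple frame-difference activity score with an invented threshold, rather than the fuzzy-logic trigger described.

    ```python
    # Hedged software sketch of the pre-/post-trigger idea: keep recent frames
    # in a ring buffer, score activity by mean absolute frame difference, and
    # dump the buffer plus the following frames once the score crosses a
    # threshold. The threshold and window sizes are illustrative.
    from collections import deque
    import numpy as np

    def triggered_segments(frames, pre=30, post=30, threshold=8.0):
        """Yield lists of frames around detected activity; frames are gray arrays."""
        buffer, prev, post_left, segment = deque(maxlen=pre), None, 0, []
        for frame in frames:
            score = 0.0 if prev is None else float(np.mean(np.abs(frame.astype(float) - prev)))
            prev = frame
            if post_left:                        # still collecting post-trigger frames
                segment.append(frame)
                post_left -= 1
                if post_left == 0:
                    yield segment
                    segment = []
            elif score > threshold:              # event onset: flush pre-trigger buffer
                segment = list(buffer) + [frame]
                post_left = post
            buffer.append(frame)
    ```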

  10. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The idea is motivated by the fact that video content is becoming richer and its feature dimensionality higher, which makes direct video analysis and understanding difficult. The proposed approach reduces the dimensionality of the video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the approach offers good robustness and discrimination.
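    A hedged sketch of the pipeline above: reduce high-dimensional per-frame features with scikit-learn's LTSA variant of locally linear embedding to obtain fingerprints, then match a query against a reference by a sliding window of distances. Feature extraction, neighborhood size, and the matching details are assumptions rather than the authors' exact design.

    ```python
    # Hedged sketch: LTSA-reduced per-frame features as video fingerprints,
    # with a sliding-window search for the best-matching temporal offset.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    def fit_reference(reference_features, n_components=8, n_neighbors=12):
        """reference_features: (n_frames, dim) array of reference frame features."""
        ltsa = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                      n_components=n_components, method="ltsa")
        reference_fp = ltsa.fit_transform(reference_features)
        return ltsa, reference_fp

    def query_fingerprint(ltsa, query_features):
        """Embed query frames into the same low-dimensional space."""
        return ltsa.transform(query_features)

    def best_match_offset(query_fp, reference_fp):
        """Slide the query fingerprint over the reference; return (offset, distance)."""
        w = len(query_fp)
        dists = [np.linalg.norm(reference_fp[i:i + w] - query_fp)
                 for i in range(len(reference_fp) - w + 1)]
        best = int(np.argmin(dists))
        return best, float(dists[best])
    ```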

  11. Paroxysmal events during prolonged video-video electroencephalography monitoring in refractory epilepsy.

    PubMed

    Sanabria-Castro, A; Henríquez-Varela, F; Monge-Bonilla, C; Lara-Maier, S; Sittenfeld-Appel, M

    2017-03-16

    Given that epileptic seizures and non-epileptic paroxysmal events have similar clinical manifestations, using specific diagnostic methods is crucial, especially in patients with drug-resistant epilepsy. Prolonged video electroencephalography monitoring during epileptic seizures reveals epileptiform discharges and has become an essential procedure for epilepsy diagnosis. The main purpose of this study is to characterise paroxysmal events and compare patterns in patients with refractory epilepsy. We conducted a retrospective analysis of medical records from 91 patients diagnosed with refractory epilepsy who underwent prolonged video electroencephalography monitoring during hospitalisation. During prolonged video electroencephalography monitoring, 76.9% of the patients (n=70) had paroxysmal events. The mean number of events was 3.4±2.7; the duration of these events was highly variable. Most patients (80%) experienced seizures during wakefulness. The most common events were focal seizures with altered levels of consciousness, progressive bilateral generalized seizures and psychogenic non-epileptic seizures. Regarding all paroxysmal events, no differences were observed in the number or type of events by sex, in duration by sex or age at onset, or in the number of events by type of event. Psychogenic nonepileptic seizures were predominantly registered during wakefulness, lasted longer, started at older ages, and were more frequent in women. Paroxysmal events recorded during prolonged video electroencephalography monitoring in patients with refractory epilepsy show similar patterns and characteristics to those reported in other regions. Copyright © 2017 The Author(s). Published by Elsevier España, S.L.U. All rights reserved.

  12. STS-111 Flight Day 7 Highlights

    NASA Astrophysics Data System (ADS)

    2002-06-01

    On Flight Day 7 of STS-111 (Space Shuttle Endeavour crew includes: Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist; International Space Station (ISS) Expedition 5 crew includes Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer; ISS Expedition 4 crew includes: Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer), this video opens with answers to questions asked by the public via e-mail about the altitude of the space station, the length of its orbit, how astronauts differentiate between up and down in the microgravity environment, and whether they hear wind noise during the shuttle's reentry. In video footage shot from inside the Quest airlock, Perrin is shown exiting the station to perform an extravehicular activity (EVA) with Chang-Diaz. Chang-Diaz is shown, in helmet-mounted camera footage, attaching cable protection booties to a fish-stringer device with multiple hooks, and Perrin is seen loosening bolts that hold the replacement unit accommodation in launch position atop the Mobile Base System (MBS). Perrin then mounts a camera atop the mast of the MBS. During this EVA, the astronauts installed the MBS on the Mobile Transporter (MT) to support the Canadarm 2 robotic arm. A camera in the Endeavour's payload bay provides footage of the Pacific Ocean, the Baja Peninsula, and the Midwestern United States. Plumes from wildfires in Nevada, Idaho, Yellowstone National Park, Wyoming, and Montana are visible. The station continues over the Great Lakes and the Eastern Provinces of Canada.

  13. STS-111 Flight Day 7 Highlights

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On Flight Day 7 of STS-111 (Space Shuttle Endeavour crew includes: Kenneth Cockrell, Commander; Paul Lockhart, Pilot; Franklin Chang-Diaz, Mission Specialist; Philippe Perrin, Mission Specialist; International Space Station (ISS) Expedition 5 crew includes Valery Korzun, Commander; Peggy Whitson, Flight Engineer; Sergei Treschev, Flight Engineer; ISS Expedition 4 crew includes: Yury Onufrienko, Commander; Daniel Bursch, Flight Engineer; Carl Walz, Flight Engineer), this video opens with answers to questions asked by the public via e-mail about the altitude of the space station, the length of its orbit, how astronauts differentiate between up and down in the microgravity environment, and whether they hear wind noise during the shuttle's reentry. In video footage shot from inside the Quest airlock, Perrin is shown exiting the station to perform an extravehicular activity (EVA) with Chang-Diaz. Chang-Diaz is shown, in helmet-mounted camera footage, attaching cable protection booties to a fish-stringer device with multiple hooks, and Perrin is seen loosening bolts that hold the replacement unit accommodation in launch position atop the Mobile Base System (MBS). Perrin then mounts a camera atop the mast of the MBS. During this EVA, the astronauts installed the MBS on the Mobile Transporter (MT) to support the Canadarm 2 robotic arm. A camera in the Endeavour's payload bay provides footage of the Pacific Ocean, the Baja Peninsula, and the Midwestern United States. Plumes from wildfires in Nevada, Idaho, Yellowstone National Park, Wyoming, and Montana are visible. The station continues over the Great Lakes and the Eastern Provinces of Canada.

  14. Video Analytics for Indexing, Summarization and Searching of Video Archives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold E.; Trease, Lynn L.

    This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table of contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

  15. Dissection videos do not improve anatomy examination scores.

    PubMed

    Mahmud, Waqas; Hyder, Omar; Butt, Jamaal; Aftab, Arsalan

    2011-01-01

    In this quasi-experimental study, we describe the effect of showing dissection videos on first-year medical students' performance in terms of test scores during a gross anatomy course. We also surveyed students' perception regarding the showing of dissection videos. Two hundred eighty-seven first-year medical students at Rawalpindi Medical College in Pakistan, divided into two groups, dissected one limb in first term and switched over to the other limb in the second term. During the second term, instruction was supplemented by dissection videos. Second-term anatomy examination marks were compared with first-term scores and with results from first-year medical students in previous years. Multiple linear regression analysis was performed, with term scores (continuous, 0-200) as the dependent variable. Students shown dissection videos scored 1.26 marks higher than those not shown. The relationship was not statistically significant (95% CI: -1.11, 3.70; P = 0.314). Ninety-three percent of students favored regular inclusion of dissection videos in curriculum, and 50% termed it the best source for learning gross anatomy. Seventy-six percent of students did not perform regular cadaver dissection. The most frequent reason cited for not performing regular dissection was high student-cadaver ratio. Dissection videos did not improve performance on final examination scores; however, students favored their use. Copyright © 2011 American Association of Anatomists.

  16. Energy 101: Wind Turbines - 2014 Update

    ScienceCinema

    None

    2018-05-11

    See how wind turbines generate clean electricity from the power of wind. The video highlights the basic principles at work in wind turbines, and illustrates how the various components work to capture and convert wind energy to electricity. This updated version also includes information on the Energy Department's efforts to advance offshore wind power. Offshore wind energy footage courtesy of Vestas.

  17. Exploring the Thermal Limits of IR-Based Automatic Whale Detection

    DTIC Science & Technology

    2014-09-30

    The goal is to automatically detect whale spouts during the northward humpback whale migration, which occurs annually rather close to shore near North Stradbroke Island, Queensland, Australia ... with concurrent visual observations. APPROACH: By obtaining continuous IR video footage during two successive northward humpback whale ...

  18. ISS Expedition 42 / 43 Soyuz Spacecraft and Crew Preparations for Launch

    NASA Image and Video Library

    2014-11-26

    NASA TV (NTV) video file of crewmembers Terry Virts, Anton Shkaplerov (Roskosmos) and Samantha Cristoforetti (ESA) during the final fit check of the Soyuz TMA-15M spacecraft at the Integration Facility, Baikonur, Kazakhstan. Includes footage of the crew climbing into the Soyuz spacecraft, interviews, a visit to a museum where the crew sign posters and a flag, a flag-raising ceremony, and a visit to the mating facility.

  19. Expedition 43 Crew Final Exams in Russia

    NASA Image and Video Library

    2015-03-13

    NASA Video File of ISS Expedition 43 final exams in Russia on March 5, 2015 with crewmembers Scott Kelly, Gennady Padalka, and Mikhail Kornienko; and backup crew Jeff Williams, Sergei Volkov and Alexei Ovchinin. Includes footage of final qualification training at the Gagarin Cosmonaut Training Center (GCTC); interview with Emily Nelson, ISS Expedition 46 Lead Flight Director; and scenes from the qualification training.

  20. Video game playing and its relations with aggressive and prosocial behaviour.

    PubMed

    Wiegman, O; van Schie, E G

    1998-09-01

    In this study of 278 children from the seventh and eighth grade of five elementary schools in Enschede, The Netherlands, the relationship between the amount of time children spent on playing video games and aggressive as well as prosocial behaviour was investigated. In addition, the relationship between the preference for aggressive video games and aggressive and prosocial behaviour was studied. No significant relationship was found between video game use in general and aggressive behaviour, but a significant negative relationship with prosocial behaviour was supported. However, separate analyses for boys and girls did not reveal this relationship. More consistent results were found for the preference for aggressive video games: children, especially boys, who preferred aggressive video games were more aggressive and showed less prosocial behaviour than those with a low preference for these games. Further analyses showed that children who preferred playing aggressive video games tended to be less intelligent.

  1. Learning to manage complexity through simulation: students' challenges and possible strategies.

    PubMed

    Gormley, Gerard J; Fenwick, Tara

    2016-06-01

    Many have called for medical students to learn how to manage complexity in healthcare. This study examines the nuances of students' challenges in coping with a complex simulation learning activity, using concepts from complexity theory, and suggests strategies to help them better understand and manage complexity. Wearing video glasses, participants took part in a simulation ward-based exercise that incorporated characteristics of complexity. Video footage was used to elicit interviews, which were transcribed. Using complexity theory as a theoretical lens, an iterative approach was taken to identify the challenges that participants faced and possible coping strategies using both interview transcripts and video footage. Students' challenges in coping with clinical complexity included being: a) unprepared for 'diving in', b) caught in an escalating system, c) captured by the patient, and d) unable to assert boundaries of acceptable practice. Many characteristics of complexity can be recreated in a ward-based simulation learning activity, affording learners an embodied and immersive experience of these complexity challenges. Possible strategies for managing complexity themes include: a) taking time to size up the system, b) attuning to what emerges, c) reducing complexity, d) boundary practices, and e) working with uncertainty. This study signals pedagogical opportunities for recognizing and dealing with complexity.

  2. Playing a first-person shooter video game induces neuroplastic change.

    PubMed

    Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian

    2012-06-01

    Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.

  3. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that uses face recognition as its search indexing feature. As the use of video cameras has increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record subjects without fixed poses, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from constant monitoring and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
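
    The paper's fuzzy PCA model is not reproduced here; the following is a minimal sketch of the general idea only, assuming an ordinary PCA basis learned from unoccluded training faces and a known binary occlusion mask (all array names and parameters are hypothetical, not taken from the paper):

      import numpy as np

      def fit_pca(train_faces, n_components=50):
          """Learn a PCA basis from vectorized, unoccluded training faces (one per row)."""
          mean = train_faces.mean(axis=0)
          _, _, vt = np.linalg.svd(train_faces - mean, full_matrices=False)
          return mean, vt[:n_components]                # shapes (d,), (k, d)

      def reconstruct_occluded(face, mask, mean, basis, n_iter=20):
          """Iteratively fill occluded pixels (mask == False) from the PCA subspace."""
          x = np.where(mask, face, mean)                # initialize gaps with the mean face
          for _ in range(n_iter):
              coeffs = basis @ (x - mean)               # project onto the subspace
              recon = mean + basis.T @ coeffs           # back-project
              x = np.where(mask, face, recon)           # keep observed pixels, update gaps
          return x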

  4. Games people play: How video games improve probabilistic learning.

    PubMed

    Schenk, Sabrina; Lech, Robert K; Suchan, Boris

    2017-09-29

    Recent research suggests that video game playing is associated with many cognitive benefits. However, little is known about the neural mechanisms mediating such effects, especially with regard to probabilistic categorization learning, which is a widely unexplored area in gaming research. Therefore, the present study aimed to investigate the neural correlates of probabilistic classification learning in video gamers in comparison to non-gamers. Subjects were scanned in a 3T magnetic resonance imaging (MRI) scanner while performing a modified version of the weather prediction task. Behavioral data yielded evidence for better categorization performance of video gamers, particularly under conditions characterized by stronger uncertainty. Furthermore, a post-experimental questionnaire showed that video gamers had acquired higher declarative knowledge about the card combinations and the related weather outcomes. Functional imaging data revealed stronger activation clusters for video gamers in the hippocampus, the precuneus, the cingulate gyrus and the middle temporal gyrus as well as in occipital visual areas and in areas related to attentional processes. All these areas are connected with each other and represent critical nodes for semantic memory, visual imagery and cognitive control. Apart from this, and in line with previous studies, both groups showed activation in brain areas that are related to attention and executive functions as well as in the basal ganglia and in memory-associated regions of the medial temporal lobe. These results suggest that playing video games might enhance the usage of declarative knowledge as well as hippocampal involvement and enhance overall learning performance during probabilistic learning. In contrast to non-gamers, video gamers showed better categorization performance, independently of the uncertainty of the condition. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Surgical gesture classification from video and kinematic data.

    PubMed

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
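
    As a rough illustration of the bag-of-features branch only (the LDS and MKL components are omitted, and the local spatio-temporal descriptors are assumed to be precomputed), a minimal sketch with hypothetical variable names might look like this:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      def bof_histograms(clip_descriptors, vocab):
          """Turn each clip's local spatio-temporal descriptors into a word histogram."""
          hists = []
          for desc in clip_descriptors:                     # desc: (n_features, dim)
              words = vocab.predict(desc)
              h, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
              hists.append(h / max(h.sum(), 1))             # normalize per clip
          return np.vstack(hists)

      def train_and_classify(train_desc, train_labels, test_desc, n_words=100):
          """Learn a k-means vocabulary, then classify clip histograms with an SVM."""
          vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(train_desc))
          clf = SVC(kernel="rbf").fit(bof_histograms(train_desc, vocab), train_labels)
          return clf.predict(bof_histograms(test_desc, vocab))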

  6. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that group as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
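
    The network architecture and joint loss are specific to the paper; the fragment below sketches only the final matching step for the S2V scenario, assuming an embedding network has already mapped stills and video frames into a common Euclidean space (function and variable names are hypothetical):

      import numpy as np

      def identify_still_against_videos(still_emb, video_embs, video_ids):
          """S2V matching: compare one still-image embedding against per-video frame embeddings.

          video_embs: list of (n_frames, d) arrays, one per gallery video.
          Returns the id of the video whose mean frame-to-still distance is smallest."""
          dists = [np.linalg.norm(frames - still_emb, axis=1).mean()
                   for frames in video_embs]
          return video_ids[int(np.argmin(dists))]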

  7. Learner-Generated Digital Video: Using Ideas Videos in Teacher Education

    ERIC Educational Resources Information Center

    Kearney, Matthew

    2013-01-01

    This qualitative study investigates the efficacy of "Ideas Videos" (or "iVideos") in pre-service teacher education. It explores the experiences of student teachers and their lecturer engaging with this succinct, advocacy-style video genre designed to evoke emotions about powerful ideas in Education (Wong, Mishra, Koehler, &…

  8. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. In particular, we address the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and negligible computational complexity increase. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone networks. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
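
    The actual system scrambles Motion JPEG 2000 wavelet coefficients inside the codec; the toy sketch below only illustrates the core idea of pseudo-randomly flipping coefficient signs inside a region of interest, with the transform and key management simplified (all names are illustrative):

      import numpy as np

      def scramble_roi(coeffs, roi_mask, key):
          """Flip the signs of pseudo-randomly selected coefficients inside a region of interest.

          coeffs:   2-D array of transform coefficients (e.g., one subband)
          roi_mask: boolean array of the same shape marking the region to conceal
          key:      integer seed; applying the function again with the same key descrambles"""
          rng = np.random.default_rng(key)
          flips = rng.integers(0, 2, size=coeffs.shape).astype(bool) & roi_mask
          out = coeffs.copy()
          out[flips] = -out[flips]
          return out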

  9. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
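
    A minimal sketch of the block-translation step described in the abstract, assuming grayscale frames as NumPy arrays; the nested-block hierarchy and the subsequent magnification/rotation solve are omitted, and the search range is an assumed parameter:

      import numpy as np

      def block_translation(key_frame, new_frame, top, left, size, search=8):
          """Estimate the (dy, dx) shift of one key-frame block in a new frame by
          exhaustive search over a small window, minimizing the sum of absolute differences."""
          block = key_frame[top:top + size, left:left + size].astype(float)
          best, best_err = (0, 0), np.inf
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = top + dy, left + dx
                  if y < 0 or x < 0:
                      continue                              # candidate outside the frame
                  cand = new_frame[y:y + size, x:x + size].astype(float)
                  if cand.shape != block.shape:
                      continue
                  err = np.abs(cand - block).sum()
                  if err < best_err:
                      best, best_err = (dy, dx), err
          return best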

  10. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  11. Video Captions Benefit Everyone.

    PubMed

    Gernsbacher, Morton Ann

    2015-10-01

    Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.

  12. Video Captions Benefit Everyone

    PubMed Central

    Gernsbacher, Morton Ann

    2016-01-01

    Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions. PMID:28066803

  13. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
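
    The paper's pairwise K-means with a temporal-consecutiveness constraint is not reproduced here; the sketch below is a simpler greedy stand-in that builds one coarser summary level by repeatedly merging the most similar pair of temporally adjacent key frames, represented by color histograms (the keep ratio is an assumed parameter):

      import numpy as np

      def coarsen_summary(histograms, keep_ratio=0.5):
          """Merge temporally adjacent key frames (color histograms) until only
          keep_ratio of them remain; returns the indices of the surviving key frames."""
          frames = [(i, h / max(h.sum(), 1e-9)) for i, h in enumerate(histograms)]
          target = max(1, int(len(frames) * keep_ratio))
          while len(frames) > target:
              # distances between adjacent pairs only (temporal consecutiveness constraint)
              dists = [np.abs(frames[k][1] - frames[k + 1][1]).sum()
                       for k in range(len(frames) - 1)]
              k = int(np.argmin(dists))
              merged = (frames[k][0], (frames[k][1] + frames[k + 1][1]) / 2)
              frames[k:k + 2] = [merged]                    # keep the earlier index
          return [i for i, _ in frames]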

  14. Overlaid caption extraction in news video based on SVM

    NASA Astrophysics Data System (ADS)

    Liu, Manman; Su, Yuting; Ji, Zhong

    2007-11-01

    Overlaid captions in news video often carry condensed semantic information that provides key cues for content-based video indexing and retrieval. However, extracting captions from video remains challenging because of complex backgrounds and low resolution. In this paper, we propose an effective overlaid caption extraction approach for news video. We first scan the video key frames using a small window, and then classify the blocks into text and non-text ones via support vector machine (SVM), with statistical features extracted from the gray level co-occurrence matrices, the LH and HL sub-band wavelet coefficients, and the oriented edge intensity ratios. Finally, morphological filtering and projection profile analysis are employed to localize and refine the candidate caption regions. Experiments show its high performance on four 30-minute news video programs.
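
    A simplified sketch of the block-classification step, assuming scikit-image and scikit-learn and using only gray-level co-occurrence features (the wavelet and edge-orientation features, the sliding-window scan, and the post-processing are omitted):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def block_features(block):
          """Texture features for one grayscale uint8 block."""
          glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

      def train_text_block_classifier(text_blocks, nontext_blocks):
          """Fit an SVM on labeled text / non-text blocks; classify new blocks with .predict()."""
          X = np.vstack([block_features(b) for b in list(text_blocks) + list(nontext_blocks)])
          y = np.array([1] * len(text_blocks) + [0] * len(nontext_blocks))
          return SVC(kernel="rbf").fit(X, y)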

  15. Using Sensor-based Demand Controlled Ventilation to Realize Energy Savings in Laboratories

    DTIC Science & Technology

    2014-03-27

    is warranted. The results show that a DCV system is life-cycle cost effective for many different HVAC system total pressure and square footage …

  16. The Measurement of Intelligence in the XXI Century using Video Games.

    PubMed

    Quiroga, M A; Román, F J; De La Fuente, J; Privado, J; Colom, R

    2016-12-05

    This paper reviews the use of video games for measuring intelligence differences and reports two studies analyzing the relationship between intelligence and performance on a leisure video game. In the first study, the main focus was to design an intelligence test using puzzles from the video game. Forty-seven young participants played "Professor Layton and the curious village"® for a maximum of 15 hours and completed a set of standardized intelligence tests. Results show that the time required for completing the game interacts with intelligence differences: the higher the intelligence, the lower the time (d = .91). Furthermore, a set of 41 puzzles showed excellent psychometric properties. The second study, done seven years later, confirmed the previous findings. We finally discuss the pros and cons of commercial video games as tools for measuring cognitive abilities, underscoring that psychologists must develop their own intelligence video games and delineate their key features for the next generation of measurement devices.

  17. Using Video Modeling to Increase Variation in the Conversation of Children with Autism

    ERIC Educational Resources Information Center

    Charlop, Marjorie H.; Gilmore, Laura; Chang, Gina T.

    2009-01-01

    The present study assessed the effects of video modeling on acquisition and generalization of variation in the conversational speech of two boys with autism. A video was made showing several versions of several topics of conversation, thus providing multiple exemplars of each conversation. Video modeling consisted of showing each child a video…

  18. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  19. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The resulting storage and transmission problems cannot be solved simply by expanding hard disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using this idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the proposed compression method for a single image (frame I) and video sequences outperforms HEVC in a low bit rate environment.

  20. The Association Between Video Game Play and Cognitive Function: Does Gaming Platform Matter?

    PubMed

    Huang, Vivian; Young, Michaelia; Fiocco, Alexandra J

    2017-11-01

    Despite consumer growth, few studies have evaluated the cognitive effects of gaming using mobile devices. This study examined the association between video game play platform and cognitive performance. Furthermore, the differential effect of video game genre (action versus nonaction) was explored. Sixty undergraduate students completed a video game experience questionnaire, and we divided them into three groups: mobile video game players (MVGPs), console/computer video game players (CVGPs), and nonvideo game players (NVGPs). Participants completed a cognitive battery to assess executive function, and learning and memory. Controlling for sex and ethnicity, analyses showed that frequent video game play is associated with enhanced executive function, but not learning and memory. MVGPs were significantly more accurate on working memory performance than NVGPs. Both MVGPs and CVGPs were similarly associated with enhanced cognitive function, suggesting that platform does not significantly determine the benefits of frequent video game play. Video game platform was found to differentially associate with preference for action video game genre and motivation for gaming. Exploratory analyses show that sex significantly affects frequent video game play, platform and genre preference, and cognitive function. This study represents a novel exploration of the relationship between mobile video game play and cognition and adds support to the cognitive benefits of frequent video game play.

  1. Non-mydriatic, wide field, fundus video camera

    NASA Astrophysics Data System (ADS)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field 20° in diameter to a rectangular field of 68° by 18°. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  2. STS-107 Mission Highlights Resource, Part 4 of 4

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video, Part 4 of 4, shows the activities of the STS-107 crew during flight days 13 through 15 of the Columbia orbiter's final flight. The crew consists of Commander Rick Husband, Pilot William McCool, Payload Commander Michael Anderson, Mission Specialists David Brown, Kalpana Chawla, and Laurel Clark, and Payload Specialist Ilan Ramon. The highlight of flight day 13 is Kalpana Chawla conversing with Mission Control Center in Houston during troubleshooting of the Combustion Module in a recovery procedure to get the MIST fire suppression experiment back online. Chawla is shown replacing an atomizer head. At Mission Control Center a vase of flowers commemorating the astronauts who died on board Space Shuttle Challenger's final flight is shown and explained. The footage of flight day 14 consists of a tour of Columbia's flight deck, middeck, and Spacehab research module. Rick Husband narrates the tour, which features Kalpana Chawla, Laurel Clark, and himself. The astronauts demonstrate hygiene, a dining tray, the orbiter's toilet, and a space iron, which is a rack for strapping down shirts. The Earth limb is shown with the Spacehab module in the foreground. Clark exercises on a bicycle for a respiration experiment, and demonstrates how a compact disk player gyrates in microgravity. On flight day 15, the combustion module is running again, and footage is shown of the Water Mist Fire-Suppression Experiment (Mist) in operation. Laurel Clark narrates a segment of the video in which Ilan Ramon exercises on a bicycle, Rick Husband, Kalpana Chawla, and Ramon demonstrate spinning and push-ups in the Spacehab module, and Clark demonstrates eating from a couple of food packets. The video ends with a shot of the Earth limb reflected on the radiator on the inside of Columbia's open payload bay door with the Earth in the background.

  3. Videos and Animations for Vocabulary Learning: A Study on Difficult Words

    ERIC Educational Resources Information Center

    Lin, Chih-cheng; Tseng, Yi-fang

    2012-01-01

    Studies on using still images and dynamic videos in multimedia annotations produced inconclusive results. A further examination, however, showed that the principle of using videos to explain complex concepts was not observed in the previous studies. This study was intended to investigate whether videos, compared with pictures, better assist…

  4. A novel sub-shot segmentation method for user-generated video

    NASA Astrophysics Data System (ADS)

    Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    With the proliferation of user-generated videos, temporal segmentation is becoming a challenging problem. Traditional video temporal segmentation methods such as shot detection are not able to work on unedited user-generated videos, since these often contain only a single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree-partitioning min-Hash technique, constructs sparse temporally constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms all the other related works. Furthermore, the results indicate that the proposed approach is able to segment user-generated videos at an average human level.
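
    The tree-partitioning min-Hash graph and dense-neighbor clustering are beyond a short sketch; the fragment below is a much simpler stand-in that marks sub-shot-level boundaries wherever adjacent-frame color histograms diverge, with the threshold an assumed parameter:

      import numpy as np

      def subshot_boundaries(frame_histograms, threshold=0.4):
          """Return frame indices where a new sub-shot-level segment starts.

          frame_histograms: (n_frames, n_bins) array of per-frame color histograms."""
          h = frame_histograms / np.maximum(
              frame_histograms.sum(axis=1, keepdims=True), 1e-9)
          d = np.abs(np.diff(h, axis=0)).sum(axis=1)        # L1 distance between neighbors
          return [0] + [i + 1 for i, dist in enumerate(d) if dist > threshold]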

  5. Estimation of low back moments from video analysis: a validation study.

    PubMed

    Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H

    2011-09-02

    This study aimed to develop, compare, and validate two versions of a video analysis method for assessing low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of the moments from one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Does expert perceptual anticipation transfer to a dissimilar domain?

    PubMed

    Müller, Sean; McLaren, Michelle; Appleby, Brendyn; Rosalie, Simon M

    2015-06-01

    The purpose of this experiment was to extend theoretical understanding of transfer of learning by investigating whether expert perceptual anticipation skill transfers to a dissimilar domain. The capability of expert and near-expert rugby players as well as novices to anticipate skill type within rugby (learning sport) was first examined using a temporal occlusion paradigm. Participants watched video footage of an opponent performing rugby skill types that were temporally occluded at different points in the opponent's action and then made a written prediction. Thereafter, the capability of participants to transfer their anticipation skill to predict pitch type in baseball (transfer sport) was examined. Participants watched video footage of a pitcher throwing different pitch types that were temporally occluded and made a written prediction. Results indicated that expert and near-expert rugby players anticipated significantly better than novices across all occlusion conditions. However, none of the skill groups were able to transfer anticipation skill to predict pitch type in baseball. The findings of this paper, along with existing literature, support the theoretical prediction that transfer of perceptual anticipation is expertise dependent and restricted to similar domains. (c) 2015 APA, all rights reserved.

  7. STS-108 Post Flight Presentation

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The crewmembers of STS-108, Commander Dominic Gorie, Pilot Mark Kelly, and Mission Specialists Linda Godwin and Daniel Tani, narrate this video as footage from the mission is shown. The crew is seen flying into Kennedy Space Center, suiting up, boarding the Endeavour Orbiter, and during launch. Various mission highlights are seen, including the rendezvous with the International Space Station (ISS) and docking of Endeavour, the unloading of the Multipurpose Logistics Module (MPLM), and the spacewalk to install thermal blankets over the Beta Gimbal Assemblies (BGAs) at the bases of the Space Station's solar panels. A glimpse is given into the difficulties of working in a microgravity environment as the crewmembers attempt to eat food before it floats away from them and drink water from a bag. The exchange of the Expedition 4 crew (Yuri I. Onufrienko, Carl E. Walz, and Daniel W. Bursch) for the Expedition 3 crew (Frank L. Culbertson, Mikhail Tyurin, and Vladimir N. Dezhurov) is also seen. The Endeavour undocks from the ISS, which is seen over the Caribbean Sea. Endeavour passes over Cuba, and footage of the Swiss Alps is shown. The video ends with the landing of the spacecraft.

  8. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed, in which video streams are synchronized and displayed in a 3D model, by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model, according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we experienced that more situational awareness was created by the 3D model, which made it easier to track people on multiple video streams. Based on all experiences from the experimental set up and the case, recommendations are formulated for use in practice. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
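
    The published method combines Pearson correlation with the microscope's point-spread function; as a loose stand-in for the correlation-plus-morphology idea only, the sketch below marks blocks whose correlation with the previous frame drops below a threshold and cleans the mask morphologically (block size and threshold are assumed parameters):

      import numpy as np
      from scipy.ndimage import binary_opening, binary_dilation

      def changed_region_mask(prev_frame, frame, block=8, corr_thresh=0.95):
          """Per-block change mask: low Pearson correlation with the previous frame marks
          a block as changed; opening drops isolated blocks, dilation keeps a margin.
          Unmarked blocks could then be stored as 'unchanged' to shrink the compressed size."""
          H, W = frame.shape
          mask = np.zeros((H // block, W // block), dtype=bool)
          for by in range(H // block):
              for bx in range(W // block):
                  a = prev_frame[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
                  b = frame[by*block:(by+1)*block, bx*block:(bx+1)*block].ravel()
                  if a.std() < 1e-6 or b.std() < 1e-6:
                      mask[by, bx] = not np.allclose(a, b)  # flat blocks: direct comparison
                  else:
                      mask[by, bx] = np.corrcoef(a, b)[0, 1] < corr_thresh
          return binary_dilation(binary_opening(mask))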

  10. Pediatric Audiometry: The Relative Success of Toy and Video Reinforcers.

    ERIC Educational Resources Information Center

    Doggett, Sheryl; Gans, Donald P.; Stein, Ramona

    2000-01-01

    An operant conditioning technique was used to determine the relative success of toys and video shows as reinforcers for testing the hearing of 28 younger (30-month-old) and 28 older (45-month-old) children. Animated toys and video shows for children were equally effective as reinforcers for both age groups. (Contains references.) (Author/CR)

  11. Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content

    PubMed Central

    Brame, Cynthia J.

    2016-01-01

    Educational videos have become an important part of higher education, providing an important content-delivery tool in many flipped, blended, and online classes. Effective use of video as an educational tool is enhanced when instructors consider three elements: how to manage cognitive load of the video; how to maximize student engagement with the video; and how to promote active learning from the video. This essay reviews literature relevant to each of these principles and suggests practical ways instructors can use these principles when using video as an educational tool. PMID:27789532

  12. Baby Killers: Documentation and Evolution of Scuttle Fly (Diptera: Phoridae) Parasitism of Ant (Hymenoptera: Formicidae) Brood

    PubMed Central

    Brown, Brian V.; Hash, John M.; Porras, Wendy; Amorim, Dalton de Souza

    2017-01-01

    Background: Numerous well-documented associations occur among species of scuttle flies (Diptera: Phoridae) and ants (Hymenoptera: Formicidae), but examples of brood parasitism are rare and the mechanisms of parasitism often remain unsubstantiated. New information: We present two video-documented examples of ant brood (larvae and pupae) parasitism by scuttle flies. In footage from Estação Biológica de Boracéia in Brazil, adult females of Ceratoconus setipennis Borgmeier can be seen attacking workers of the Linepithema humile (Mayr) species group while they are carrying brood, and ovipositing directly onto brood in the nest. In another remarkable example, footage from the Soltis Center, near Peñas Blancas in Costa Rica, shows adult females of an unidentified species of the Apocephalus grandipalpus Borgmeier group mounting Pheidole Westwood brood upside-down and ovipositing while the brood are being transported by workers. Analysis of evolutionary relationships (in preparation) among Apocephalus Coquillett species shows that this is a newly derived behavior within the genus, as the A. grandipalpus group arises within a group of adult ant parasitoids. In contrast, relationships of Ceratoconus Borgmeier have not been studied, and the lifestyles of the other species in the genus are largely unknown. PMID:28325980

  13. Using Short Videos to Teach Research Ethics

    NASA Astrophysics Data System (ADS)

    Loui, M. C.

    2014-12-01

    Created with support from the National Science Foundation, EthicsCORE (www.natonalethicscenter.org) is an online resource center for ethics in science and engineering. Among the resources, EthicsCORE hosts short video vignettes produced at the University of Nebraska - Lincoln that dramatize problems in the responsible conduct of research, such as peer review of journal submissions, and mentoring relationships between faculty and graduate students. I will use one of the video vignettes in an interactive pedagogical demonstration. After showing the video, I will ask participants to engage in a think-pair-share activity on the professional obligations of researchers. During the sharing phase, participants will supply the reasons for these obligations.

  14. Video segmentation using keywords

    NASA Astrophysics Data System (ADS)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. Creating these frames accurately takes a lot of time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which show that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest broader testing and combining other methods to improve this result in the future.

  15. Video Time Encoding Machines

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value. PMID:21296708
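
    As a minimal illustration of the spiking mechanism only (the filter bank, the feedback, and the recovery algorithm are omitted), a single ideal integrate-and-fire time encoder for a sampled 1-D signal might be sketched as follows, with bias, kappa, and delta as assumed parameters:

      import numpy as np

      def iaf_encode(signal, dt, bias=1.0, kappa=1.0, delta=0.05):
          """Integrate-and-fire time encoding of a sampled 1-D signal.

          Integrates (signal + bias) / kappa and records a spike time whenever the
          integral reaches the threshold delta, then subtracts the threshold (reset)."""
          spike_times, integral, t = [], 0.0, 0.0
          for x in signal:
              integral += (x + bias) / kappa * dt
              t += dt
              if integral >= delta:
                  spike_times.append(t)
                  integral -= delta
          return np.array(spike_times)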

  16. Open-source telemedicine platform for wireless medical video communication.

    PubMed

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform that will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
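
    The platform's objective quality metric is PSNR computed after temporal alignment; the sketch below shows only the PSNR averaging step, assuming 8-bit frames that have already been aligned (the variable frame delay step is omitted):

      import numpy as np

      def psnr(original, received, peak=255.0):
          """Peak signal-to-noise ratio between two same-sized 8-bit frames."""
          mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
          return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

      def mean_video_psnr(original_frames, received_frames):
          """Average per-frame PSNR over an already-aligned pair of frame sequences."""
          return float(np.mean([psnr(o, r)
                                for o, r in zip(original_frames, received_frames)]))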

  17. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    PubMed Central

    Panayides, A.; Eleftheriou, I.; Pantziaris, M.

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform that will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings. PMID:23573082

  18. Small Particle Impact Damage on Different Glass Substrates

    NASA Technical Reports Server (NTRS)

    Waxman, R.; Guven, I.; Gray, P.

    2017-01-01

    Impact experiments using sand particles were performed on four distinct glass substrates. The sand particles were characterized using the X-Ray micro-CT technique; 3-D reconstruction of the particles was followed by further size and shape analyses. High-speed video footage from impact tests was used to calculate the incoming and rebound velocities of the individual sand impact events, as well as particle volume. Further, video analysis was used in conjunction with optical and scanning electron microscopy to relate the incoming velocity and shape of the particles to subsequent fractures, including both radial and lateral cracks. Analysis was performed using peridynamic simulations.

  19. Video Observations Inside Channels of Erupting Geysers, Geyser Valley, Russia

    NASA Astrophysics Data System (ADS)

    Belousov, A.; Belousova, M.; Nechaev, A.

    2011-12-01

    Geysers are a variety of hot springs characterized by violent ejections of water and steam separated by periods of repose. While ordinary boiling springs are numerous and occur in many places on Earth, geysers are very rare. In total, less than 1000 geysers are known worldwide, and most of them are located in three large geyser fields: Yellowstone (USA), Geyser Valley (Russia), and El Tatio (Chile). Several physical models were suggested to explain periodic eruptions of geysers, but realistic understanding of processes was hampered by the scarcity of field data on the internal plumbing of geyser systems. Here we present data based on video observations of interior conduit systems for geysers in Geyser Valley in Kamchatka, Russia. To investigate geyser plumbing systems we lowered a video camera (with thermal and water insulation) into the conduits of four erupting geysers. These included Velikan and Bolshoy, the largest geysers in the field, ejecting about 20 and 15 cub.m of water to heights of 25 and 15 m, respectively, with rather stable periods of approximately 5 h and 1 h. We also investigated Vanna and Kovarny, small geysers with irregular regimes, ejecting about ten liters of water to heights as much as 1.5 m, with periods of several minutes. The video footage reveals internal plumbing geometries and hydrodynamic processes that contradict the widely accepted "simple vertical conduit model", which regards geyser eruptions as caused by flashing of superheated water into steam. In contrast, our data fit the long-neglected "boiler model", in which steam accumulates in an underground cavity (boiler) and periodically erupts out through a water-filled, inverted siphon. We describe the physical rationale and conditions for the periodic discharge of steam from a boiler. Channels of the studied geysers are developed by ascending hot water in deposits of several voluminous prehistoric landslides (debris avalanches). The highly irregular contacts between adjacent debris

  20. JPL-20170825-CASSINf-0001-Cassini Nears the End of Its Mission Video File

    NASA Image and Video Library

    2017-08-25

    On Sept. 15, 2017, NASA's Cassini spacecraft will end its mission by diving into the atmosphere of Saturn. Animation: one of Cassini's final passes between Saturn and its rings, Cassini's final 22 orbits, and the final plunge. Footage: construction of Cassini at JPL. Interview excerpts from Linda Spilker, Cassini Project Scientist; Earl Maize, Cassini Project Manager; Julie Webster, Cassini Spacecraft Operations Manager.

  1. Converting laserdisc video to digital video: a demonstration project using brain animations.

    PubMed

    Jao, C S; Hier, D B; Brint, S U

    1995-01-01

    Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.

  2. High efficiency video coding for ultrasound video communication in m-health systems.

    PubMed

    Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G

    2012-01-01

    Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission, at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that a 50% reduction in bitrate requirements is possible for equivalent clinical quality.

  3. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; a digital still camera, for example, has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We build a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the utility of the camera.

  4. Humanizing Instructional Videos in Physics: When Less Is More

    NASA Astrophysics Data System (ADS)

    Schroeder, Noah L.; Traxler, Adrienne L.

    2017-06-01

    Many instructors in science, technology, engineering, and mathematics fields are striving to create active learning environments in their classrooms and in doing so are frequently moving the lecture portion of their course into online video format. In this classroom-based study, we used a two group randomized experimental design to examine the efficacy of an instructional video that incorporates a human hand demonstrating and modeling how to solve frictional inclined plane problems compared to an identical video that did not include the human hand. The results show that the learners who viewed the video without the human hand present performed significantly better on a learning test and experienced a significantly better training efficiency than the learners who viewed the video with the human hand present. Meanwhile, those who learned with the human hand present in the instructional video rated the instructor as being more humanlike and engaging. The results have implications for both theory and practice. Implications for those designing instructional videos are discussed, as well as the limitations of the current study.

  5. AlliedSignal driver's viewer enhancement (DVE) for paramilitary and commercial applications

    NASA Astrophysics Data System (ADS)

    Emanuel, Michael; Caron, Hubert; Kovacevic, Branislav; Faina-Cherkaoui, Marcela; Wrobel, Leslie; Turcotte, Gilles

    1999-07-01

    AlliedSignal Driver's Viewer Enhancement (DVE) system is a thermal imager using a 320 X 240 uncooled microbolometer array. This high performance system was initially developed for military combat and tactical wheeled vehicles. It features a very small sensor head remotely mounted from the display, control and processing module. The sensor head has a modular design and is being adapted to various commercial applications such as truck and car-driving aid, using specifically designed low cost optics. Tradeoffs in the system design, system features and test results are discussed in this paper. A short video shows footage of the DVE system while driving at night.

  6. AIDS education video: Karate Kids.

    PubMed

    Lowry, C

    1993-01-01

    Street Kids International, in cooperation with the World Health Organization and the National Film Board of Canada, has developed an animated action-adventure video, "Karate Kids," as part of a cross-cultural program of health education that concerns human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) and targets street children in developing countries. Simple, but explicit, information is delivered during the 22-minute cartoon; the package also includes a training book for educators, and a pocket comic book. Distributed in 17 languages (it is readily adapted to new language versions, independent of the original producers) in over 100 countries, the video is shown in community theaters, hospitals, schools, and prisons, and out of the backs of trucks. It is easily copied, which is encouraged. After 3 years in distribution, field evaluation has demonstrated that the greatest strength of the video is its ability to stimulate discussion where no discussion was taking place before. Critics include those who believe there is no need for it and those who feel it should be used alone. The results of one evaluation study showed use of the video alone was insufficient; those of a cross-cultural participatory evaluation survey indicated a significant impact on knowledge and attitudes when the video was followed by discussion. Another significant aspect of the project is that it treats street children with respect; they are actors, not victims, who have legitimate needs and rights. They become visible in a world that is often unaware of them.

  7. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  8. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  9. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  10. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  11. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  12. STS-44 Onboard 16mm Photography

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This silent video was filmed by the crew of the STS-44 Space Shuttle mission using a 16mm camera. Astronauts Frederick D. Gregory, Terence T. Henricks, F. Story Musgrave, Mario Runco, Jr., James S. Voss, and Thomas J. Hennen filmed various crew activities inside the shuttle, the deployment of the Defense Support Program (DSP) satellite, and Earth-view footage of arid land masses and cloud cover.

  13. The Daily Show with Jon Stewart: Part 2

    ERIC Educational Resources Information Center

    Trier, James

    2008-01-01

    "The Daily Show With Jon Stewart" is one of the best critical literacy programs on television, and in this Media Literacy column the author suggests ways that teachers can use video clips from the show in their classrooms. (For Part 1, see EJ784683.)

  14. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
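
    The dynamic-range adjustment, unsharp masking, and frame averaging mentioned above are standard image-processing operations. Purely as a rough, non-real-time illustration of plain unsharp masking (the workbench's contrast-regulated variant adds a local gain term not modelled here), the following sketch uses numpy and scipy; the function name and parameter values are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(frame, sigma=2.0, amount=1.5):
    """Sharpen a grayscale frame by adding back a scaled high-pass component."""
    frame = frame.astype(float)
    blurred = gaussian_filter(frame, sigma)
    detail = frame - blurred                   # high-frequency component
    return np.clip(frame + amount * detail, 0.0, 255.0)

# Toy usage on a random 240x320 "frame".
frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
print(unsharp_mask(frame).shape)               # (240, 320)
```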

  15. Are YouTube seizure videos misleading? Neurologists do not always agree.

    PubMed

    Brna, P M; Dooley, J M; Esser, M J; Perry, M S; Gordon, K E

    2013-11-01

    The internet has become the first stop for the public and patients to seek health-related information. Video-sharing websites are particularly important sources of information for those seeking answers about seizures and epilepsy. Because of the widespread popularity of YouTube, we sought to explore whether a seizure diagnosis and classification could reliably be applied. All videos related to "seizures" were reviewed, and irrelevant videos were excluded. The remaining 162 nonduplicate videos were analyzed by 4 independent pediatric neurologists who classified the events as epilepsy seizures, nonepileptic seizures, or indeterminate. Videos designated as epilepsy seizures were then classified into focal, generalized, or unclassified. At least 3 of the 4 reviewers agreed that 35% of the videos showed that the events were "epilepsy seizures", at least 3 of the 4 reviewers agreed that 28% of the videos demonstrated that the events were "nonepileptic seizures", and there was good agreement that 7% of the videos showed that the event was "indeterminate". Overall, interrater agreement was moderate at k=0.57 for epilepsy seizures and k=0.43 for nonepileptic seizures. For seizure classification, reviewer agreement was greatest for "generalized seizures" (k=0.45) and intermediate for "focal seizures" (k=0.27), and there was no agreement for unclassified events (k=0.026, p=0.2). Overall, neurology reviewer agreement suggests that only approximately one-third of the videos designated as "seizures" on the most popular video-sharing website, YouTube, definitely depict a seizure. Caution should be exercised in the use of such online video media for accessing educational or self-diagnosis aids for seizures. © 2013.
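
    The k values quoted above are interrater agreement statistics. As a simple illustration of how such a figure is computed for two raters (the study pooled four), the sketch below implements Cohen's kappa over two lists of labels; the function and the toy labels are invented for the example and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one label per item."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick the same label.
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: two raters labelling ten videos (illustrative labels only).
a = ["epileptic", "nonepileptic", "epileptic", "indeterminate", "epileptic",
     "nonepileptic", "epileptic", "epileptic", "nonepileptic", "indeterminate"]
b = ["epileptic", "nonepileptic", "nonepileptic", "indeterminate", "epileptic",
     "epileptic", "epileptic", "epileptic", "nonepileptic", "epileptic"]
print(round(cohens_kappa(a, b), 2))  # about 0.49
```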

  16. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  17. A web-based video annotation system for crowdsourcing surveillance videos

    NASA Astrophysics Data System (ADS)

    Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.

    2014-03-01

    Video surveillance systems are of a great value to prevent threats and identify/investigate criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform several tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools such as training modules, roles and labels, task management. This system can be used in a real-time streaming mode to detect any potential threats or as an investigative tool to analyze past events. Annotators can annotate video contents assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about a newly reported incident. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.

  18. Designing online audiovisual heritage services: an empirical study of two comparable online video services

    NASA Astrophysics Data System (ADS)

    Ongena, G.; van de Wijngaert, L. A. L.; Huizer, E.

    2013-03-01

    The purpose of this study is to seek input for a new online audiovisual heritage service. In doing so, we assess comparable online video services to gain insights into the motivations and perceptual innovation characteristics of the video services. The research is based on data from a Dutch survey held among 1,939 online video service users. The results show that the online video services have overlapping antecedents but differ in motivations and in perceived innovation characteristics. Hence, in general, one can state that, in comparison, online video services serve different needs and differ in perceived innovation characteristics. This implies that one can design online video services for different needs. In addition to scientific implications, the outcomes also provide guidance for practitioners in implementing new online video services.

  19. 76 FR 55916 - Requirements and Registration for Are You Prepared? Video Contest

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-09

    ... and Registration for Are You Prepared? Video Contest AGENCY: Centers for Disease Control and... general public to make a 60 second video that shows how you are prepared for any emergency and reinforces.... SUPPLEMENTARY INFORMATION: Subject of Challenge Competition Emergency Preparedness Video Contest. September is...

  20. Deep hierarchical attention network for video description

    NASA Astrophysics Data System (ADS)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.
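
    The abstract does not spell out the model, so the following minimal PyTorch sketch shows only the soft-attention building block such decoders rest on: score each frame feature, softmax-normalise the scores, and take the weighted sum as a context vector. The class name, dimensions, and random inputs are assumptions; the paper's actual decoder is hierarchical and its encoders combine a CNN with a bidirectional LSTM.

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    """Single-level soft attention over per-frame features of shape (batch, time, dim)."""
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, frame_feats):
        weights = torch.softmax(self.score(frame_feats), dim=1)   # (batch, time, 1)
        context = (weights * frame_feats).sum(dim=1)              # (batch, feat_dim)
        return context, weights.squeeze(-1)

feats = torch.randn(2, 40, 512)             # 2 clips, 40 frames, 512-d frame features
context, attn = FrameAttention(512)(feats)
print(context.shape, attn.shape)            # torch.Size([2, 512]) torch.Size([2, 40])
```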

  1. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise, or 'snow'. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality. It would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.
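
    The caption does not disclose VISAR's algorithms, so the sketch below is only a generic illustration of one way inter-frame camera translation can be estimated for stabilization: FFT-based phase correlation between two grayscale frames, using numpy alone. The function name and the synthetic test are assumptions, and the rotation, zoom, and noise handling mentioned above are not covered.

```python
import numpy as np

def estimate_shift(reference, moved):
    """Estimate the (row, col) offset by which `moved` is displaced relative to
    `reference`, using FFT-based phase correlation."""
    f_ref = np.fft.fft2(reference)
    f_mov = np.fft.fft2(moved)
    cross_power = f_mov * np.conj(f_ref)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Offsets past half the frame size wrap around to negative values.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, correlation.shape))

# Synthetic check: shift a random frame by (3, -5) pixels and recover the offset.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(frame, shifted))   # (3, -5); stabilizing would undo this shift
```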

  2. Digital Literacy and Online Video: Undergraduate Students' Use of Online Video for Coursework

    ERIC Educational Resources Information Center

    Tiernan, Peter; Farren, Margaret

    2017-01-01

    This paper investigates how to enable undergraduate students' use of online video for coursework using a customised video retrieval system (VRS), in order to understand digital literacy with online video in practice. This study examines the key areas influencing the use of online video for assignments such as the learning value of video,…

  3. Evaluation of a radiation survey training video developed from a real-time video radiation detection system.

    PubMed

    Wang, Wei-Hsung; McGlothlin, James D; Smith, Deborah J; Matthews, Kenneth L

    2006-02-01

    This project incorporates radiation survey training into a real-time video radiation detection system, thus providing a practical perspective for the radiation worker on efficient performance of radiation surveys. Regular surveys to evaluate radiation levels are necessary not only to recognize potential radiological hazards but also to keep the radiation exposure as low as reasonably achievable. By developing and implementing an instructional learning system using a real-time radiation survey training video showing specific categorization of work elements, radiation workers trained with this system demonstrated better radiation survey practice.

  4. The development of video game enjoyment in a role playing game.

    PubMed

    Wirth, Werner; Ryffel, Fabian; von Pape, Thilo; Karnowski, Veronika

    2013-04-01

    This study examines the development of video game enjoyment over time. The results of a longitudinal study (N=62) show that enjoyment increases over several sessions. Moreover, results of a multilevel regression model indicate a causal link between the dependent variable video game enjoyment and the predictor variables exploratory behavior, spatial presence, competence, suspense and solution, and simulated experiences of life. These findings are important for video game research because they reveal the antecedents of video game enjoyment in a real-world longitudinal setting. Results are discussed in terms of the dynamics of video game enjoyment under real-world conditions.

  5. Use of videos for Distribution Construction and Maintenance (DC&M) training

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, G.M.

    This paper presents the results of a survey taken among members of the American Gas Association (AGA)'s Distribution Construction and Maintenance (DC&M) committee to gauge the extent, sources, mode of use, and degree of satisfaction with videos as a training aid in distribution construction and maintenance skills. Also cites AGA Engineering Technical Note, DCM-88-3-1, as a catalog of the videos listed by respondents to the survey. Comments on the various sources of training videos and the characteristics of videos from each. Conference presentation included showing of a sampling of video segments from these various sources. 1 fig.

  6. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  7. Social Properties of Mobile Video

    NASA Astrophysics Data System (ADS)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations in regards to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for adoption and design of mobile video technologies and services are discussed as well.

  8. Why Students Learn More From Dialogue-Than Monologue-Videos: Analyses of Peer Interactions

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.; Kang, Seokmin; Yaghmourian, David L.

    2017-01-01

    In 2 separate studies, we found that college-age students learned more when they collaboratively watched tutorial dialogue-videos than lecture-style monologue-videos. In fact, they can learn as well as the tutees in the dialogue-videos. These results replicate similar findings in the literature showing the advantage of dialogue-videos even when…

  9. Video games.

    PubMed

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  10. Video2vec Embeddings Recognize Events When Examples Are Scarce.

    PubMed

    Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G M

    2017-10-01

    This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlations between the words are utilized to learn a more effective representation by optimizing a joint objective balancing descriptiveness and predictability. We show how learning the Video2vec embedding using a multimodal predictability loss, including appearance, motion and audio features, results in a better predictable representation. We also propose an event specific variant of Video2vec to learn a more accurate representation for the words, which are indicative of the event, by introducing a term sensitive descriptiveness loss. Our experiments on three challenging collections of web videos from the NIST TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets demonstrate: i) the advantages of Video2vec over representations using attributes or alternative embeddings, ii) the benefit of fusing video modalities by an embedding over common strategies, iii) the complementarity of term sensitive descriptiveness and multimodal predictability for event recognition. By its ability to improve predictability of present day audio-visual video features, while at the same time maximizing their semantic descriptiveness, Video2vec leads to state-of-the-art accuracy for both few- and zero-example recognition of events in video.

  11. The Video Head Impulse Test.

    PubMed

    Halmagyi, G M; Chen, Luke; MacDougall, Hamish G; Weber, Konrad P; McGarvie, Leigh A; Curthoys, Ian S

    2017-01-01

    In 1988, we introduced impulsive testing of semicircular canal (SCC) function measured with scleral search coils and showed that it could accurately and reliably detect impaired function even of a single lateral canal. Later we showed that it was also possible to test individual vertical canal function in peripheral and also in central vestibular disorders and proposed a physiological mechanism for why this might be so. For the next 20 years, between 1988 and 2008, impulsive testing of individual SCC function could only be accurately done by a few aficionados with the time and money to support scleral search-coil systems-an expensive, complicated and cumbersome, semi-invasive technique that never made the transition from the research lab to the dizzy clinic. Then, in 2009 and 2013, we introduced a video method of testing function of each of the six canals individually. Since 2009, the method has been taken up by most dizzy clinics around the world, with now close to 100 refereed articles in PubMed. In many dizzy clinics around the world, video Head Impulse Testing has supplanted caloric testing as the initial and in some cases the final test of choice in patients with suspected vestibular disorders. Here, we consider seven current, interesting, and controversial aspects of video Head Impulse Testing: (1) introduction to the test; (2) the progress from the head impulse protocol (HIMPs) to the new variant-suppression head impulse protocol (SHIMPs); (3) the physiological basis for head impulse testing; (4) practical aspects and potential pitfalls of video head impulse testing; (5) problems of vestibulo-ocular reflex gain calculations; (6) head impulse testing in central vestibular disorders; and (7) to stay right up-to-date-new clinical disease patterns emerging from video head impulse testing. With thanks and appreciation we dedicate this article to our friend, colleague, and mentor, Dr Bernard Cohen of Mount Sinai Medical School, New York, who since his first

  12. Digital video clips for improved pedagogy and illustration of scientific research — with illustrative video clips on atomic spectrometry

    NASA Astrophysics Data System (ADS)

    Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary

    1999-12-01

    This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.

  13. Latest Highlights from our Direct Measurement Video Collection

    NASA Astrophysics Data System (ADS)

    Vonk, M.; Bohacek, P. H.

    2014-12-01

    Recent advances in technology have made videos much easier to produce, edit, store, transfer, and view. This has spawned an explosion in the production of a wide variety of different types of pedagogical videos. But with the exception of student-made videos (which are often of poor quality), almost all of the educational videos being produced are passive. No matter how compelling the content, students are expected to simply sit and watch them. Because we feel that being engaged and active are necessary components of student learning, we have been working to create a free online library of Direct Measurement Videos (DMVs). These videos are short, high-quality videos of real events, shot in a way that allows students to make measurements directly from the video. Instead of handing students a word problem about a car skidding on ice, we actually show them the car skidding on ice. We then ask them to measure the important quantities, make calculations based on those measurements, and solve for unknowns. DMVs are more interesting than their word-problem equivalents and frequently inspire further questions about the physics of the situation or about the uncertainty of the measurement in ways that word problems almost never do. We feel that it is simply impossible to watch a video of a roller coaster or a rocket and then argue that word problems are better. In this talk I will highlight some new additions to our DMV collection. This work is supported by NSF TUES award #1245268.

  14. New Integrated Video and Graphics Technology: Digital Video Interactive.

    ERIC Educational Resources Information Center

    Optical Information Systems, 1987

    1987-01-01

    Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)

  15. Portrayal of tobacco in Mongolian language YouTube videos: policy gaps.

    PubMed

    Tsai, Feng-Jen; Sainbayar, Bolor

    2016-07-01

    This study examined how effectively current policy measures control depictions of tobacco in Mongolian language YouTube videos. A search of YouTube videos using the Mongolian term for 'tobacco', and employing 'relevance' and 'view count' criteria, resulted in a total sample of 120 videos, from which 38 unique videos were coded and analysed. Most videos were antismoking public service announcements; however, analyses of viewing patterns showed that pro-smoking videos accounted for about two-thirds of all views. Pro-smoking videos were also perceived more positively and had a like:dislike ratio of 4.6 compared with 3.5 and 1.5, respectively, for the magic trick and antismoking videos. Although Mongolia prohibits tobacco advertising, 3 of the pro-smoking videos were made by a tobacco company; additionally, 1 pro-smoking video promoted electronic cigarettes. Given the popularity of Mongolian YouTube videos that promote smoking, policy changes are urgently required to control this medium, and more effectively protect youth and young adults from insidious tobacco marketing. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Correlation Between Arthroscopy Simulator and Video Game Performance: A Cross-Sectional Study of 30 Volunteers Comparing 2- and 3-Dimensional Video Games.

    PubMed

    Jentzsch, Thorsten; Rahm, Stefan; Seifert, Burkhardt; Farei-Campagna, Jan; Werner, Clément M L; Bouaicha, Samy

    2016-07-01

    To investigate the association between arthroscopy simulator performance and video game skills. This study compared the performances of 30 volunteers without experience performing arthroscopies in 3 different tasks of a validated virtual reality knee arthroscopy simulator with the video game experience using a questionnaire and actual performances in 5 different 2- and 3-dimensional (D) video games of varying genres on 2 different platforms. Positive correlations between knee arthroscopy simulator and video game performances (ρ = 0.63, P < .001) as well as experiences (ρ = 0.50, P = .005) were found. The strongest correlations were found for the task of catching (hooking) 6 foreign bodies (virtual rings; "triangulation") and the dribbling performance in a sports game and a first-person shooter game, as well as the meniscus resection and a tile-matching puzzle game (all ρ ≥ 0.60, P < .001). No correlations were found for any of the knee arthroscopy simulator tasks and a strategy game. Although knee arthroscopy performances do not correlate with 2-D strategy video game skills, they show a correlation with 2-D tile-matching puzzle games only for easier tasks with a rather limited focus, and highly correlate with 3-D sports and first-person shooter video games. These findings show that experienced and good 3-D gamers are better arthroscopists than nonexperienced and poor 3-D gamers. Level II, observational cross-sectional study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  17. A content-based news video retrieval system: NVRS

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by category, such as politics, finance, amusement, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.

  18. Satellite Video Shows Movement of Major U.S. Winter Storm

    NASA Image and Video Library

    2014-02-12

    View a video of the storm here: bit.ly/1m9aJFY This visible image of the winter storm over the U.S. south and East Coast was taken by NOAA's GOES-13 satellite on Feb. 12 at 1855 UTC/1:55 p.m. EST. Snow-covered ground can be seen over the Great Lakes region and Ohio Valley. On February 12 at 10 a.m. EST, NOAA's National Weather Service (NWS) continued to issue watches and warnings from Texas to New England. Specifically, NWS cited that Winter Storm Warnings and Winter Weather Advisories were in effect from eastern Texas eastward across the interior section of southeastern U.S. states and across much of the eastern seaboard including the Appalachians. Winter storm watches are in effect for portions of northern New England as well as along the western slopes of the northern and central Appalachians. For updates on local forecasts, watches and warnings, visit NOAA's www.weather.gov webpage. NOAA's Weather Prediction Center (WPC) noted the storm is expected to bring "freezing rain spreading into the Carolinas; significant snow accumulations are expected in the interior Mid-Atlantic states tonight into Thursday, and ice storm warnings and freezing rain advisories are in effect across much of central Georgia." GOES satellites provide the kind of continuous monitoring necessary for intensive data analysis. Geostationary describes an orbit in which a satellite is always in the same position with respect to the rotating Earth. This allows GOES to hover continuously over one position on Earth's surface, appearing stationary. As a result, GOES provide a constant vigil for the atmospheric "triggers" for severe weather conditions such as tornadoes, flash floods, hail storms and hurricanes. For updated information about the storm system, visit NOAA's WPC website: www.hpc.ncep.noaa.gov/ For more information about GOES satellites, visit: www.goes.noaa.gov/ or goes.gsfc.nasa.gov/ Rob Gutro, NASA's Goddard Space Flight Center. Credit: NOAA/NASA GOES Project

  19. An Evaluation of Educational Neurological Eye Movement Disorder Videos Posted on Internet Video Sharing Sites.

    PubMed

    Hickman, Simon J

    2016-03-01

    Internet video sharing sites allow the free dissemination of educational material. This study investigated the quality and educational content of videos of eye movement disorders posted on such sites. Educational neurological eye movement videos were identified by entering the titles of the eye movement abnormality into the search boxes of the video sharing sites. Also, suggested links were followed from each video. The number of views, likes, and dislikes for each video were recorded. The videos were then rated for their picture and sound quality. Their educational value was assessed according to whether the video included a description of the eye movement abnormality, the anatomical location of the lesion (if appropriate), and the underlying diagnosis. Three hundred fifty-four of these videos were found on YouTube and Vimeo. There was a mean of 6,443 views per video (range, 1-195,957). One hundred nineteen (33.6%) had no form of commentary about the eye movement disorder shown apart from the title. Forty-seven (13.3%) contained errors in the title or in the text. Eighty (22.6%) had excellent educational value by describing the eye movement abnormality, the anatomical location of the lesion, and the underlying diagnosis. Of these, 30 also had good picture and sound quality. The videos with excellent educational value had a mean of 9.84 "likes" per video compared with 2.37 for those videos without a commentary (P < 0.001). The videos that combined excellent educational value with good picture and sound quality had a mean of 10.23 "likes" per video (P = 0.004 vs videos with no commentary). There was no significant difference in the mean number of "dislikes" between those videos that had no commentary or which contained errors and those with excellent educational value. There are a large number of eye movement videos freely available on these sites; however, due to the lack of peer review, a significant number have poor educational value due to having no commentary

  20. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as real-time hazard display and/or to update an obstruction database.
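
    The trigonometric relationships referred to above are not given in the abstract; the sketch below covers only the simplest special case, a nadir-pointing camera in level flight, where ground features sweep through the field of view at an angular rate of v/h so that altitude follows directly from ground speed and pixel velocity. The function name, the small-angle pixels-to-radians conversion, and the example numbers are assumptions.

```python
import math

def altitude_from_pixel_flow(ground_speed_mps, pixel_velocity_px_s,
                             horizontal_fov_deg, image_width_px):
    """Altitude above ground for a nadir-pointing camera in level flight.

    The ground sweeps through the field of view at an angular rate
    omega = v / h, so h = v / omega; pixel velocity is converted to an
    angular rate with a small-angle radians-per-pixel scale.
    """
    rad_per_pixel = math.radians(horizontal_fov_deg) / image_width_px
    angular_rate = pixel_velocity_px_s * rad_per_pixel      # rad/s
    return ground_speed_mps / angular_rate

# Example: 60 m/s ground speed, features crossing the image at 400 px/s,
# a 40-degree field of view imaged onto 640 pixels.
print(round(altitude_from_pixel_flow(60.0, 400.0, 40.0, 640), 1))  # about 137.5 m
```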

  1. Video Vectorization via Tetrahedral Remeshing.

    PubMed

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  2. The Influence of National Culture on Educational Videos: The Case of MOOCs

    ERIC Educational Resources Information Center

    Bayeck, Rebecca Yvonne; Choi, Jinhee

    2018-01-01

    This paper discusses the influence of cultural dimensions on Massive Open Online Course (MOOC) introductory videos. The study examined the introductory videos produced by three universities on Coursera platforms using communication theory and Hofstede's cultural dimensions. The results show that introductory videos in MOOCs are influenced by the…

  3. Perioperative outcomes of video- and robot-assisted segmentectomies.

    PubMed

    Rinieri, Philippe; Peillon, Christophe; Salaün, Mathieu; Mahieu, Julien; Bubenheim, Michael; Baste, Jean-Marc

    2016-02-01

    Video-assisted thoracic surgery appears to be technically difficult for segmentectomy. Conversely, robotic surgery could facilitate the performance of segmentectomy. The aim of this study was to compare the early results of video- and robot-assisted segmentectomies. Data were collected prospectively on videothoracoscopy from 2010 and on robotic procedures from 2013. Fifty-one patients who were candidates for minimally invasive segmentectomy were included in the study. Perioperative outcomes of video-assisted and robotic segmentectomies were compared. The minimally invasive segmentectomies included 32 video- and 16 robot-assisted procedures; 3 segmentectomies (2 video-assisted and 1 robot-assisted) were converted to lobectomies. Four conversions to thoracotomy were necessary for anatomical reason or arterial injury, with no uncontrolled bleeding in the robotic arm. There were 7 benign or infectious lesions, 9 pre-invasive lesions, 25 lung cancers, and 10 metastatic diseases. Patient characteristics, type of segment, conversion to thoracotomy, conversion to lobectomy, operative time, postoperative complications, chest tube duration, postoperative stay, and histology were similar in the video and robot groups. Estimated blood loss was significantly higher in the video group (100 vs. 50 mL, p = 0.028). The morbidity rate of minimally invasive segmentectomy was low. The short-term results of video-assisted and robot-assisted segmentectomies were similar, and more data are required to show any advantages between the two techniques. Long-term oncologic outcomes are necessary to evaluate these new surgical practices. © The Author(s) 2016.

  4. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  5. Sequential color video to parallel color video converter

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.

  6. Video Self-Modeling

    ERIC Educational Resources Information Center

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  7. Real-time video compressing under DSP/BIOS

    NASA Astrophysics Data System (ADS)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression on a DSP processor. The programming framework for the video compression is built around a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. The embedded real-time operating system DSP/BIOS and its API functions are used to build the periodic functions, tasks, and interrupts needed to realize real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double-buffer switching and the EDMA data-transfer controller move data from external to internal memory, so that data transfer and processing take place at the same time; architecture-level optimizations are used to improve the software pipeline. DSP/BIOS provides multi-thread scheduling, and the whole system sustains high-speed transfer of large amounts of data. Experimental results show the encoder can achieve real-time encoding of 768×576, 25 frame/s video.
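
    The EDMA and DSP/BIOS machinery cannot be reproduced off the target hardware; as a language-neutral illustration of the ping-pong (double-buffer) idea the abstract relies on (start copying block n+1 into the idle buffer while block n is processed, then swap), the Python sketch below uses a background thread as a stand-in for the EDMA transfer. All names are invented for the example.

```python
import threading

def process(block):
    """Stand-in for the encode step: just sum the samples."""
    return sum(block)

def double_buffered_run(source_blocks):
    """Overlap the 'transfer' of block n+1 with the processing of block n."""
    if not source_blocks:
        return []
    buffers = [None, None]           # ping-pong buffer pair
    results = []

    def fetch(slot, block):
        buffers[slot] = list(block)  # simulated DMA copy into "on-chip" memory

    fetch(0, source_blocks[0])       # prime the first buffer
    for i in range(len(source_blocks)):
        copier = None
        if i + 1 < len(source_blocks):
            copier = threading.Thread(target=fetch,
                                      args=((i + 1) % 2, source_blocks[i + 1]))
            copier.start()                        # copy the next block ...
        results.append(process(buffers[i % 2]))   # ... while this one is processed
        if copier is not None:
            copier.join()                         # copy must finish before the swap
    return results

print(double_buffered_run([[1, 2], [3, 4], [5, 6]]))  # [3, 7, 11]
```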

  8. Physics Girl: Where Education meets Cat Videos

    NASA Astrophysics Data System (ADS)

    Cowern, Dianna

    YouTube is usually considered an entertainment medium for watching cat, gaming, and music videos. But educational channels have been gaining momentum on the platform, some garnering millions of subscribers and billions of views. The Physics Girl YouTube channel is an educational series created by Dianna Cowern with PBS Digital Studios. Using Physics Girl as an example, this talk will examine what it takes to start a short-form educational video series, including logistics and resources. One benefit of video is that every failure is documented on camera and can, and will, be used in this talk as a learning tool. We will look at the channel's demographic reach, discuss best practices for effective physics outreach, and survey how online media and technology can facilitate good and bad learning. The aim of this talk is to show how videos are a unique way to share science and enrich the learning experience, in and out of a classroom.

  9. Voluntary and involuntary emotional memory following an analogue traumatic stressor: the differential effects of communality in men and women.

    PubMed

    Kamboj, Sunjeev K; Oldfield, Lucy; Loewenberger, Alana; Das, Ravi K; Bisby, James; Brewin, Chris R

    2014-12-01

    Men and women show differences in performance on emotional processing tasks. Sex also interacts with personality traits to affect information processing. Here we examine effects of sex, and two personality traits that are differentially expressed in men and women - instrumentality and communality - on voluntary and involuntary memory for distressing video-footage. On session one, participants (n = 39 men; 40 women) completed the Bem Sex-Role Inventory, which assesses communal and instrumental traits. After viewing film-footage of death/serious injury, participants recorded daily involuntary memories (intrusions) relating to the footage on an online diary for seven days, returning on day eight for a second session to perform a voluntary memory task relating to the film. Communality interacted with sex such that men with higher levels of communality reported more frequent involuntary memories. Alternatively, a communality × sex interaction reflected a tendency for women with high levels of communality to perform more poorly on the voluntary recognition memory task. The study involved healthy volunteers with no history of significant psychological disorder. Future research with clinical populations will help to determine the generalizability of the current findings. Communality has separate effects on voluntary and involuntary emotional memory. We suggest that high levels of communality in men and women may confer vulnerability to the negative effects of stressful events either through the over-encoding of sensory/perceptual-information in men or the reduced encoding of contextualised, verbally-based, voluntarily accessible representations in women. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used a technique that encodes reduced-resolution video and restores the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used because their performance gains are limited and appear only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective quality across the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the metric, and diverse bitrates were tested to assess general performance. Experimental results showed that the VMAF-based BD-rate improved by about 51% compared to conventional HEVC, with VMAF values improving especially at low bitrates. In subjective tests, the method also delivered better visual quality at similar bitrates.
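
    The network architectures are not specified in the abstract, so the PyTorch sketch below shows only the shape of the idea: downscale a frame before encoding, then restore full resolution at the decoder with a small learned residual upsampler on top of a bilinear baseline. The module, layer sizes, and scale factor are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameRestorer(nn.Module):
    """Toy decoder-side CNN: upsample a half-resolution frame and refine it."""
    def __init__(self, channels=3, features=16):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )

    def forward(self, low_res, full_size):
        up = F.interpolate(low_res, size=full_size, mode="bilinear", align_corners=False)
        return up + self.refine(up)   # predict a residual correction on the upsampled frame

frame = torch.rand(1, 3, 720, 1280)   # stand-in for an original frame
low = F.interpolate(frame, scale_factor=0.5, mode="bilinear", align_corners=False)
restored = FrameRestorer()(low, frame.shape[-2:])
print(low.shape, restored.shape)      # (1, 3, 360, 640) and (1, 3, 720, 1280)
```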

  11. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  12. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... description per calendar quarter, either during prime time or on children's programming; (2) Television... technical capability necessary to pass through the video description, unless using the technology for... video description per calendar quarter during prime time or on children's programming, on each channel...

  13. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... description per calendar quarter, either during prime time or on children's programming; (2) Television... technical capability necessary to pass through the video description, unless using the technology for... video description per calendar quarter during prime time or on children's programming, on each channel...

  14. Impact of video games on plasticity of the hippocampus.

    PubMed

    West, G L; Konishi, K; Diarra, M; Benady-Chorney, J; Drisdelle, B L; Dahmani, L; Sodums, D J; Lepore, F; Jolicoeur, P; Bohbot, V D

    2017-08-08

    The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. A subsequent randomised longitudinal training experiment demonstrated that first-person shooting games reduce grey matter within the hippocampus in participants using non-spatial memory strategies. Conversely, participants who use hippocampus-dependent spatial strategies showed increased grey matter in the hippocampus after training. A control group that trained on 3D-platform games displayed growth in either the hippocampus or the functionally connected entorhinal cortex. A third study replicated the effect of action video game training on grey matter in the hippocampus. These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game.Molecular Psychiatry advance online publication, 8 August 2017; doi:10.1038/mp.2017.155.

  15. 76 FR 68117 - Video Description: Implementation of the Twenty-First Century Communications and Video...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-03

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Parts 79 [MB Docket No. 11-43; FCC 11-126] Video Description: Implementation of the Twenty-First Century Communications and Video Accessibility Act of 2010... implementation of the Video Description elements of the Twenty-First Century Communications and Video...

  16. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling with a low cost and labor efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfied performances in mining web videos and learning surgery-related knowledge show that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Proof of Concept of Automated Collision Detection Technology in Rugby Sevens.

    PubMed

    Clarke, Anthea C; Anson, Judith M; Pyne, David B

    2017-04-01

    Clarke, AC, Anson, JM, and Pyne, DB. Proof of concept of automated collision detection technology in rugby sevens. J Strength Cond Res 31(4): 1116-1120, 2017-Developments in microsensor technology allow for automated detection of collisions in various codes of football, removing the need for time-consuming postprocessing of video footage. However, little research is available on the ability of microsensor technology to be used across various sports or genders. Game video footage was matched with microsensor-detected collisions (GPSports) in one men's (n = 12 players) and one women's (n = 12) rugby sevens match. True-positive, false-positive, and false-negative events between video and microsensor-detected collisions were used to calculate recall (ability to detect a collision) and precision (ability to accurately identify a collision). The precision was similar between the men's and women's rugby sevens games (∼0.72; scale 0.00-1.00); however, the recall in the women's game (0.45) was less than that for the men's game (0.69). This resulted in 45% of collisions for men and 62% of collisions for women being incorrectly labeled. Currently, the automated collision detection system in GPSports microtechnology units has only modest utility in rugby sevens, and it seems that a rugby sevens-specific algorithm is needed. Differences in measures between the men's and women's games may be a result of physical size, strength, and physicality, as well as technical and tactical factors.
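
    Recall and precision here are simple ratios of true-positive (TP), false-positive (FP), and false-negative (FN) event counts. The sketch below reproduces the calculation with invented counts chosen only so the output lands near the men's figures reported above; they are not the study's raw data.

```python
def recall_precision(tp, fp, fn):
    """Recall: share of real collisions detected. Precision: share of detections that are real."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision

# Hypothetical counts for one match (chosen to land near the men's figures above).
recall, precision = recall_precision(tp=69, fp=27, fn=31)
print(round(recall, 2), round(precision, 2))   # 0.69 0.72
```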

  18. Making good physics videos

    NASA Astrophysics Data System (ADS)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  19. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial compressed file size reductions by a factor 0.5 on average are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
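
    The saliency model itself is not reproduced here; the sketch below illustrates only the foveation-filter stage of the first variant, blurring a grayscale frame in proportion to its distance from a single assumed high-priority point by blending between a few precomputed Gaussian blur levels (numpy/scipy). The blending scheme, number of levels, and maximum blur are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, center, max_sigma=8.0, levels=5):
    """Blur `frame` (2-D array) increasingly with distance from `center`."""
    rows, cols = np.indices(frame.shape)
    dist = np.hypot(rows - center[0], cols - center[1])
    dist /= dist.max()                                   # normalise to [0, 1]
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = np.stack([frame.astype(float)] +
                     [gaussian_filter(frame.astype(float), s) for s in sigmas[1:]])
    idx = dist * (levels - 1)                            # fractional blur level per pixel
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = idx - lo
    return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]

frame = np.random.rand(120, 160)
print(foveate(frame, center=(60, 80)).shape)             # (120, 160)
```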

  20. Echocardiogram video summarization

    NASA Astrophysics Data System (ADS)

    Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin

    2001-05-01

    This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize the digital echocardiogram videos by temporally segmenting them into the constituent views and representing each view by the most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of the echocardiogram videos. Two different criteria are used: presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to different modes of echocardiograms present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave. The corresponding frame is chosen to be the key-frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of the selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
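
    The actual system reads an on-screen ECG time-marker; as a simplified stand-in, the sketch below detects R-wave peaks in a sampled ECG trace as thresholded local maxima and maps each peak to the nearest video frame index. The threshold rule, sampling rates, and synthetic trace are assumptions.

```python
import numpy as np

def r_peak_key_frames(ecg, ecg_rate_hz, frame_rate_hz, threshold_sd=2.0):
    """Indices of the video frames nearest to detected R-wave peaks."""
    ecg = np.asarray(ecg, dtype=float)
    threshold = ecg.mean() + threshold_sd * ecg.std()
    # An R-peak candidate exceeds the threshold and is a local maximum.
    is_peak = (ecg[1:-1] >= threshold) & (ecg[1:-1] > ecg[:-2]) & (ecg[1:-1] >= ecg[2:])
    peak_samples = np.flatnonzero(is_peak) + 1
    peak_times = peak_samples / ecg_rate_hz
    return np.round(peak_times * frame_rate_hz).astype(int)

# Synthetic trace: 4 s of baseline noise with beats at 0.8, 1.6, 2.4 and 3.2 s.
rate = 250
ecg = 0.02 * np.random.randn(4 * rate)
for beat in (0.8, 1.6, 2.4, 3.2):
    ecg[int(beat * rate)] += 1.0
print(r_peak_key_frames(ecg, ecg_rate_hz=rate, frame_rate_hz=30))   # [24 48 72 96]
```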

  1. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission by a video programming distributor. (8) Children's Programming. Television programming directed at children 16 years of age and under. (b) The following video programming distributors must provide... or on children's programming, on each programming stream on which they carry one of the top four...

  2. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in multimedia area. While most of the existing research was mainly focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both the visual and semantic analysis is a natural way for video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  3. Video prompting versus other instruction strategies for persons with Alzheimer's disease.

    PubMed

    Perilli, Viviana; Lancioni, Giulio E; Hoogeveen, Frans; Caffó, Alessandro; Singh, Nirbhay; O'Reilly, Mark; Sigafoos, Jeff; Cassano, Germana; Oliva, Doretta

    2013-06-01

    Two studies assessed the effectiveness of video prompting as a strategy to support persons with mild and moderate Alzheimer's disease in performing daily activities. In study I, video prompting was compared to an existing strategy relying on verbal instructions. In study II, video prompting was compared to another existing strategy relying on static pictorial cues. Video prompting and the other strategies were counterbalanced across tasks and participants and compared within alternating treatments designs. Video prompting was effective in all participants. Similarly effective were the other 2 strategies, and only occasional differences between the strategies were reported. Two social validation assessments showed that university psychology students and graduates rated the patients' performance with video prompting more favorably than their performance with the other strategies. Video prompting may be considered a valuable alternative to the other strategies to support daily activities in persons with Alzheimer's disease.

  4. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class unequal protected video transmission system over wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate transmission (CBR). In the splitting process the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  5. Concept of Video Bookmark (Videomark) and Its Application to the Collaborative Indexing of Lecture Video in Video-Based Distance Education

    ERIC Educational Resources Information Center

    Haga, Hirohide

    2004-01-01

    This article describes the development of the video bookmark, hereinafter referred to as the videomark, and its application to the collaborative indexing of the lecture video in video-based distance education system. The combination of the videomark system with the bulletin board system (BBS), which is another network tool used for discussion, is…

  6. Toward Dietary Assessment via Mobile Phone Video Cameras.

    PubMed

    Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce

    2010-11-13

    Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.

  7. Teaching autistic children conversational speech using video modeling.

    PubMed Central

    Charlop, M H; Milstein, J P

    1989-01-01

    We assessed the effects of video modeling on acquisition and generalization of conversational skills among autistic children. Three autistic boys observed videotaped conversations consisting of two people discussing specific toys. When criterion for learning was met, generalization of conversational skills was assessed with untrained topics of conversation; new stimuli (toys); unfamiliar persons, siblings, and autistic peers; and other settings. The results indicated that the children learned through video modeling, generalized their conversational skills, and maintained conversational speech over a 15-month period. Video modeling shows much promise as a rapid and effective procedure for teaching complex verbal skills such as conversational speech. PMID:2793634

  8. Video Games Related to Young Adults: Mapping Research Interest

    ERIC Educational Resources Information Center

    Piotrowski, Chris

    2015-01-01

    This study attempts to identify the typological-research domain of the extant literature on video games related to college-age samples (18-29 years-of-age). A content analysis of 264 articles, from PsycINFO for these identifiers, was performed. Findings showed that negative or pathological aspects of video gaming, i.e., violence potential,…

  9. Baby FaceTime: can toddlers learn from online video chat?

    PubMed

    Myers, Lauren J; LeWitt, Rachel B; Gallo, Renee E; Maselli, Nicole M

    2017-07-01

    There is abundant evidence for the 'video deficit': children under 2 years old learn better in person than from video. We evaluated whether these findings applied to video chat by testing whether children aged 12-25 months could form relationships with and learn from on-screen partners. We manipulated social contingency: children experienced either real-time FaceTime conversations or pre-recorded Videos as the partner taught novel words, actions and patterns. Children were attentive and responsive in both conditions, but only children in the FaceTime group responded to the partner in a temporally synced manner. After one week, children in the FaceTime condition (but not the Video condition) preferred and recognized their Partner, learned more novel patterns, and the oldest children learned more novel words. Results extend previous studies to demonstrate that children under 2 years show social and cognitive learning from video chat because it retains social contingency. A video abstract of this article can be viewed at: https://youtu.be/rTXaAYd5adA. © 2016 John Wiley & Sons Ltd.

  10. Feasibility Study On Missile Launch Detection And Trajectory Tracking

    DTIC Science & Technology

    2016-09-01

    Vehicles ( UAVs ) in military operations, their role in a missile defense operation is not well defined. The simulation program discussed in this thesis ...targeting information to an attacking UAV to reliably intercept the missile. B . FURTHER STUDIES The simulation program can be enhanced to improve the...intercept the threat. This thesis explores the challenges in creating a simulation program to process video footage from an unstable platform and the

  11. ISS Expedition 43 Soyuz Rollout

    NASA Image and Video Library

    2015-04-06

    NASA TV (NTV) video file of ISS Expedition 43 Soyuz rollout to launch pad. Includes footage of the rollout by train; rocket hoisted into upright position; interview with Bob Behnken, Chief of Astronaut Office; Dr. John Charles, chief of the International Science Office of NASA's Human Research Program, Johnson Space Center; and family and friends speaking with and saying goodbye to ISS Expedition 43-46 One Year crewmember Scott Kelly.

  12. Preventing Bulk Cash and Weapons Smuggling into Mexico: Establishing an Outbound Policy for the Southwest Border for Customs and Border Protection

    DTIC Science & Technology

    2010-12-01

    Houston, Los Angeles, Phoenix , San Antonio, and San Diego, are significant storage locations, as well as regional and national transportation and...system for human smuggling on the SWB and often use foot guides to guide aliens through the POEs on the SWB. Video footage retrieved on September 1...example, kidnappings in Phoenix rose to 267 and all were drug-related (Finklea, 2010, p. 10). D. WEAPONS TRAFFICKING Weapons are strictly controlled

  13. Before Putting Mouth (and Operation) in Gear, Ensure Brain is Engaged: The importance of Communication and Information in Military Operations

    DTIC Science & Technology

    2009-06-01

    from Ursinus College, a Masters Degree in Organizational Management from the University of Phoenix , and a Masters Degree in National Security and...itself changed. Instead of changing to meet emerging requirements, especially in light of the Tet Offensive, any information campaign came across as...releasing spectacular video footage taken by cameras located on the noses of ‘smart’ weaponry as they glided with uncanny accuracy through the doors

  14. Examination of YouTube videos related to synthetic cannabinoids

    PubMed Central

    Kecojevic, Aleksandar; Basch, Corey H.

    2016-01-01

    The popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube that provide a platform for user-generated content can convey misinformation or glorify use of SCBs. The aim of this study was to fill this gap by describing the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms “K2” and “spice” were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n = 42). The most common content in the videos was description of K2 (n = 69), followed by mentioning dangers of using K2 (n = 47), mentioning side effects (n = 38) and showing a person using K2 (n = 37). One-third of the videos (n = 34) promoted use of K2, while 22 videos mentioned risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead the most widely viewed videos related to SCBs are uploaded by consumers. The content of these consumer videos often provides viewers access to a wide array of uploaders describing, encouraging, participating in, and promoting use. PMID:27639268

  15. Examination of YouTube videos related to synthetic cannabinoids.

    PubMed

    Fullwood, M Dottington; Kecojevic, Aleksandar; Basch, Corey H

    2016-08-17

    The popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube that provide a platform for user-generated content can convey misinformation or glorify use of SCBs. The aim of this study was to fill this gap by describing the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms "K2" and "spice" were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n=42). The most common content in the videos was description of K2 (n=69), followed by mentioning dangers of using K2 (n=47), mentioning side effects (n=38) and showing a person using K2 (n=37). One-third of the videos (n=34) promoted use of K2, while 22 videos mentioned risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead the most widely viewed videos related to SCBs are uploaded by consumers. The content of these consumer videos often provides viewers access to a wide array of uploaders describing, encouraging, participating in, and promoting use.

  16. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of video frames effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
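
    The sparsity-estimation step described above can be illustrated with a short sketch (an assumption-laden reconstruction, not the authors' implementation): take the 2D DWT of a frame, normalize the coefficient energies, sort them in descending order, and report the fraction of coefficients needed to reach a given energy threshold. PyWavelets' `pywt.dwt2` is used for a single-level transform; the wavelet and threshold values are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def estimate_sparsity(frame, wavelet="haar", energy_threshold=0.99):
    """Fraction of 2D DWT coefficients needed to capture `energy_threshold` of the energy."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), wavelet)
    coeffs = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
    energy = coeffs ** 2
    energy = energy / energy.sum()            # energy normalization
    sorted_energy = np.sort(energy)[::-1]     # sort in descending order
    k = np.searchsorted(np.cumsum(sorted_energy), energy_threshold) + 1
    return k / coeffs.size                    # proportion of dominant coefficients

frame = np.random.rand(64, 64)                # stand-in for one multi-view video frame
print(estimate_sparsity(frame))
```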

  17. Documenting of Geologic Field Activities in Real-Time in Four Dimensions: Apollo 17 as a Case Study for Terrestrial Analogues and Future Exploration

    NASA Technical Reports Server (NTRS)

    Feist, B.; Bleacher, J. E.; Petro, N. E.; Niles, P. B.

    2018-01-01

    During the Apollo exploration of the lunar surface, thousands of still images, 16 mm videos, TV footage, samples, and surface experiments were captured and collected. In addition, observations and descriptions of the surface were radioed to Mission Control as part of standard communications and subsequently transcribed. The archive of this material represents perhaps the best recorded set of geologic field campaigns and will serve as the example of how to conduct field work on other planetary bodies for decades to come. However, that archive of material exists in disparate locations and formats with varying levels of completeness, making it not easily cross-referenceable. While video and audio exist for the missions, they are not time synchronized, and images taken during the missions are not time or location tagged. Sample data, while robust, are not easily available in the context of where the samples were collected, and are not connected to the astronauts' descriptions of the samples or to the video footage of their collection (if available). A more than five-year undertaking to reconstruct and reconcile the Apollo 17 mission archive, from launch through splashdown, has generated an integrated record of the entire mission, resulting in searchable, synchronized image, voice, and video data, with geologic context provided at the time each sample was collected. Through www.apollo17.org the documentation of the field investigation conducted by the Apollo 17 crew is presented in chronologic sequence, with additional context provided by high-resolution Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images and a corresponding digital terrain model (DTM) of the Taurus-Littrow Valley.

  18. Analysis of a severe head injury in World Cup alpine skiing.

    PubMed

    Yamazaki, Junya; Gilgien, Matthias; Kleiven, Svein; McIntosh, Andrew S; Nachbauer, Werner; Müller, Erich; Bere, Tone; Bahr, Roald; Krosshaug, Tron

    2015-06-01

    Traumatic brain injury (TBI) is the leading cause of death in alpine skiing. It has been found that helmet use can reduce the incidence of head injuries between 15% and 60%. However, knowledge on optimal helmet performance criteria in World Cup alpine skiing is currently limited owing to the lack of biomechanical data from real crash situations. This study aimed to estimate impact velocities in a severe TBI case in World Cup alpine skiing. Video sequences from a TBI case in World Cup alpine skiing were analyzed using a model-based image matching technique. Video sequences from four camera views were obtained in full high-definition (1080p) format. A three-dimensional model of the course was built based on accurate measurements of piste landmarks and matched to the background video footage using the animation software Poser 4. A trunk-neck-head model was used for tracking the skier's trajectory. Immediately before head impact, the downward velocity component was estimated to be 8 m·s⁻¹. After impact, the upward velocity was 3 m·s⁻¹, whereas the velocity parallel to the slope surface was reduced from 33 m·s⁻¹ to 22 m·s⁻¹. The frontal plane angular velocity of the head changed from 80 rad·s⁻¹ left tilt immediately before impact to 20 rad·s⁻¹ right tilt immediately after impact. A unique combination of high-definition video footage and accurate measurements of landmarks in the slope made possible a high-quality analysis of head impact velocity in a severe TBI case. The estimates can provide crucial information on how to prevent TBI through helmet performance criteria and design.

  19. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for video data that reduce the overall video transmission delay. To save time in the conversion process, the parallelism of the FPGA is used for video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed FPGA-based low-delay transmission method increases the DMA write transmission rate by 34% compared with the existing method, and the overall video delay is reduced to 23.6 ms.

  20. Consumer-based technology for distribution of surgical videos for objective evaluation.

    PubMed

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques that enable surgeons to view them. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, routed by cable through a hub to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard available consumer-based programs were utilized to convert the video into a more appropriate format for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated appropriate quality to grade for these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed via various methods to surgeons for grading with GOALS. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.

  1. Design of batch audio/video conversion platform based on JavaEE

    NASA Astrophysics Data System (ADS)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes, and other significant features. Faced with massive and diverse data, converting quickly and efficiently to a unified code format poses great difficulties for digital publishing organizations. In view of this demand, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture and combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies in the platform architecture design are analyzed emphatically, and an efficient audio and video format conversion system is designed and developed, composed of a "front display system", a "core scheduling server", and a "conversion server". The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
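
    The platform itself is JavaEE-based; the following Python sketch only illustrates the underlying batch-conversion idea of shelling out to the open-source FFmpeg tool and running several conversions concurrently, loosely in the spirit of the "conversion server". All paths and encoder settings are hypothetical.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def convert_to_mp4(src: Path, dst_dir: Path) -> Path:
    """Transcode one file to an H.264/AAC MP4 by calling the ffmpeg command-line tool."""
    dst = dst_dir / (src.stem + ".mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)],
        check=True,
    )
    return dst

def batch_convert(sources, dst_dir: Path, workers=4):
    """Run several conversions concurrently, loosely mirroring a pool of conversion servers."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: convert_to_mp4(s, dst_dir), sources))

# Example (hypothetical paths):
# batch_convert([Path("a.avi"), Path("b.mov")], Path("converted"))
```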

  2. Video as a technology for interpersonal communications: a new perspective

    NASA Astrophysics Data System (ADS)

    Whittaker, Steve

    1995-03-01

    Some of the most challenging multimedia applications have involved real- time conferencing, using audio and video to support interpersonal communication. Here we re-examine assumptions about the role, importance and implementation of video information in such systems. Rather than focussing on novel technologies, we present evaluation data relevant to both the classes of real-time multimedia applications we should develop and their design and implementation. Evaluations of videoconferencing systems show that previous work has overestimated the importance of video at the expense of audio. This has strong implications for the implementation of bandwidth allocation and synchronization. Furthermore our recent studies of workplace interaction show that prior work has neglected another potentially vital function of visual information: in assessing the communication availability of others. In this new class of application, rather than providing a supplement to audio information, visual information is used to promote the opportunistic communications that are prevalent in face-to-face settings. We discuss early experiments with such connection applications and identify outstanding design and implementation issues. Finally we examine a different class of application 'video-as-data', where the video image is used to transmit information about the work objects themselves, rather than information about interactants.

  3. Engaging pre-service teachers to teach science contextually with scientific approach instructional video

    NASA Astrophysics Data System (ADS)

    Susantini, E.; Kurniasari, I.; Fauziah, A. N. M.; Prastowo, T.; Kholiq, A.; Rosdiana, L.

    2018-01-01

    Contextual teaching and learning (CTL) presents new concepts in real experiences and situations, where students can find meaningful relationships between abstract ideas and practical applications. Implementation of CTL using a scientific approach encourages teachers to find constructive ways of delivering and organizing science content in classroom settings. An instructional video modelling a scientific approach in CTL was therefore developed. Questionnaires with open-ended questions asked whether modelling through the instructional video could help pre-service teachers teach science contextually with a scientific approach. Data on pre-service teachers’ views were analyzed descriptively. The aims of this research are to engage pre-service teachers in learning how to teach CTL and to report their responses to learning how to teach CTL using the video. Ten pre-service teachers in the science department were involved; all watched videos that demonstrated combined material on CTL and the scientific approach and completed worksheets analyzing the video contents. The results show that pre-service teachers could learn to teach contextually and make use of a scientific approach in science classroom settings with the help of the model in the video.

  4. Video in the Middle: Purposeful Design of Video-Based Mathematics Professional Development

    ERIC Educational Resources Information Center

    Seago, Nanette; Koellner, Karen; Jacobs, Jennifer

    2018-01-01

    In this article the authors described their exploration of a particular design element they labeled "video in the middle." As part of the video in the middle design, the viewing of carefully selected video clips from teachers' classrooms is sandwiched between pre- and postviewing activities that are expected to support teachers'…

  5. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered to three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  6. Android Video Streaming

    DTIC Science & Technology

    2014-05-01

    natural choice. In this document, we describe several aspects of video streaming and the challenges of performing video streaming between Android-based...client application was needed. Typically something like VideoLAN Client ( VLC ) is used for this purpose in a desktop environment. However, while VLC is...a very mature application on Windows and Linux, VLC for Android is still in a beta testing phase, and versions have only been developed to work

  7. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The incorporation of videos by generating QR codes and relevant pictures from the video stream via a software implementation was the content of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions in the book, and the design strategies compared to regular books.

  8. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
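
    As a rough illustration of the full-reference measurement mentioned above, the sketch below computes per-frame PSNR between encoder-input frames and reconstructed frames and averages it over a sequence. It assumes 8-bit frames held as NumPy arrays and is not tied to the authors' test setup.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between an encoder-input frame and its reconstruction (8-bit by default)."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, rec_frames) -> float:
    """Average per-frame PSNR over a video sequence."""
    return float(np.mean([psnr(r, d) for r, d in zip(ref_frames, rec_frames)]))
```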

  9. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks

    PubMed Central

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113

  10. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    PubMed

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
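
    A minimal sketch of the buffer-driven download control described in the abstract above (an illustration, not the authors' implementation): the member device pauses fetching once enough video is buffered and resumes when the buffer runs low. The watermark values are hypothetical.

```python
def keep_downloading(buffered_seconds: float,
                     downloading: bool,
                     low_watermark: float = 10.0,
                     high_watermark: float = 30.0) -> bool:
    """Decide whether the member device should keep requesting video data."""
    if buffered_seconds >= high_watermark:
        return False          # enough data buffered: stop the download
    if buffered_seconds <= low_watermark:
        return True           # buffer running low: request additional video data
    return downloading        # in between: keep the current state (hysteresis)
```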

  11. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a decoder-side technique for hiding transmission errors. It works by analyzing the spatial or temporal information in available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames and the error-containing frames are processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, such as 48% higher PSNR and 94% higher SSIM, than the Block Matching algorithm.
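
    For illustration only (not the compared implementations), the sketch below conveys the flavor of temporal block-matching concealment: the damaged block is replaced by the candidate block in the previous frame whose boundary row best matches the intact pixels above the damaged area, under a sum-of-absolute-differences criterion. Block size, search range, and the single-row template are simplifying assumptions.

```python
import numpy as np

def conceal_block(prev, curr, y, x, bs=16, search=8):
    """Fill the damaged bs x bs block at (y, x) in `curr` from the best match in `prev`."""
    # Use the intact row of pixels just above the damaged block as a matching template.
    template = curr[max(y - 1, 0), x:x + bs].astype(float)
    best_pos, best_cost = (y, x), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 1 or xx < 0 or yy + bs > prev.shape[0] or xx + bs > prev.shape[1]:
                continue
            candidate = prev[yy - 1, xx:xx + bs].astype(float)
            cost = np.abs(template - candidate).sum()   # sum of absolute differences
            if cost < best_cost:
                best_cost, best_pos = cost, (yy, xx)
    by, bx = best_pos
    curr[y:y + bs, x:x + bs] = prev[by:by + bs, bx:bx + bs]
    return curr
```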

  12. Teaching introductory undergraduate physics using commercial video games

    NASA Astrophysics Data System (ADS)

    Mohanty, Soumya D.; Cantu, Sergio

    2011-09-01

    Commercial video games are increasingly using sophisticated physics simulations to create a more immersive experience for players. This also makes them a powerful tool for engaging students in learning physics. We provide some examples to show how commercial off-the-shelf games can be used to teach specific topics in introductory undergraduate physics. The examples are selected from a course taught predominantly through the medium of commercial video games.

  13. Two-Stream Transformer Networks for Video-based Face Alignment.

    PubMed

    Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches which cannot explicitly model the temporal dependency in videos and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes the video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on the benchmarking video-based face alignment datasets show very competitive performance of our method in comparisons to the state-of-the-arts.

  14. Informal Physics Learning from Video Games: A Case Study Using Gameplay Videos

    ERIC Educational Resources Information Center

    Croxton, DeVaughn; Kortemeyer, Gerd

    2018-01-01

    Researching informal gameplay can be challenging, since as soon as a formal study design is imposed, it becomes neither casual nor self-motivated. As a case study of a non-invasive design, we analyze publicly posted gameplay videos to assess the effectiveness of a physics educational video game on special relativity. These videos offer unique…

  15. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  16. STS-107 Mission Highlights Resource, Part 3 of 4

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video, Part 3 of 4, shows the activities of the STS-107 crew during flight days 9 through 12 of the Columbia orbiter's final flight. The crew consists of Commander Rick Husband, Pilot William McCool, Payload Commander Michael Anderson, Mission Specialists David Brown, Kalpana Chawla, and Laurel Clark, and Payload Specialist Ilan Ramon. On flight day 9 David Brown and other crew members are at work on experiments in the Spacehab research module, and imagery is shown from the Mediterranean Israeli Dust Experiment (MEIDEX) on a pass over North Africa and the Horn of Africa. Ilan Ramon narrates part of the footage from flight day 10, and intravehicular activities of the astronauts onboard Columbia are shown, as well as views of the Gulf of Aden and Lake Chad, which is seen with the back of the orbiter in the foreground. Rick Husband narrates the footage from day 11, which includes cleaning duties and maintenance, as well as an excellent view of the Sinai Peninsula, Israel, and Jordan, as well as the Mediterranean Sea, Red Sea, and Gulf of Aqaba. The highlight of flight day 12 is a conversation between Columbia's crew and the crew of the International Space Station (ISS). A special section of Earth views at the end of the video shows: 1) Atlantic Ocean, Strait of Gibraltar, Mediterranean Sea, Iberian Peninsula, Morocco, and Algeria; 2) Baja Peninsula; 3) Cyprus and Mediterranean Sea; 4) Florida; 5) Earth limb and Pacific Ocean; 6) North Carolina Outer Banks, Cape Hatteras, and Atlantic Ocean; 7) Houston with zoom out to Texas and Louisiana; 8) Mt. Vesuvius (Italy); 9) Earth limb and Atlantic Ocean; 10) Earth limb and terminator, and Pacific Ocean; 11) Saudi Arabia, Yemen, Oman, and Arabian Sea.

  17. Recognizing problem video game use.

    PubMed

    Porter, Guy; Starcevic, Vladan; Berle, David; Fenech, Pauline

    2010-02-01

    It has been increasingly recognized that some people develop problem video game use, defined here as excessive use of video games resulting in various negative psychosocial and/or physical consequences. The main objectives of the present study were to identify individuals with problem video game use and compare them with those without problem video game use on several variables. An international, anonymous online survey was conducted, using a questionnaire with provisional criteria for problem video game use, which the authors have developed. These criteria reflect the crucial features of problem video game use: preoccupation with and loss of control over playing video games and multiple adverse consequences of this activity. A total of 1945 survey participants completed the survey. Respondents who were identified as problem video game users (n = 156, 8.0%) differed significantly from others (n = 1789) on variables that provided independent, preliminary validation of the provisional criteria for problem video game use. They played longer than planned and with greater frequency, and more often played even though they did not want to and despite believing that they should not do it. Problem video game users were more likely to play certain online role-playing games, found it easier to meet people online, had fewer friends in real life, and more often reported excessive caffeine consumption. People with problem video game use can be identified by means of a questionnaire and on the basis of the present provisional criteria, which require further validation. These findings have implications for recognition of problem video game users among individuals, especially adolescents, who present to mental health services. Mental health professionals need to acknowledge the public health significance of the multiple negative consequences of problem video game use.

  18. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often has high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added via 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns of short-term, fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.

  19. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.

  20. ARPA-E: Innovating Today. Transforming Tomorrow.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohlfing, Eric; Brown, Kristen; Gerbi, Jennifer

    Innovation and entrepreneurism are integral parts of America’s national fiber and driving forces behind many of the technologies that define our modern lives. It’s this entrepreneurial spirit – in conjunction with world-class institutions and talent – that enable the United States to develop advanced energy technologies that can solve the many challenges we face. Featuring remarks from multiple ARPA-E staff, this video explores how ARPA-E leverages our nation’s resources to help nurture and grow America’s energy innovation community. The video also incorporates footage shot onsite with several ARPA-E awardees who are innovating solutions to transform tomorrow’s energy future.

  1. The Video Head Impulse Test

    PubMed Central

    Halmagyi, G. M.; Chen, Luke; MacDougall, Hamish G.; Weber, Konrad P.; McGarvie, Leigh A.; Curthoys, Ian S.

    2017-01-01

    In 1988, we introduced impulsive testing of semicircular canal (SCC) function measured with scleral search coils and showed that it could accurately and reliably detect impaired function even of a single lateral canal. Later we showed that it was also possible to test individual vertical canal function in peripheral and also in central vestibular disorders and proposed a physiological mechanism for why this might be so. For the next 20 years, between 1988 and 2008, impulsive testing of individual SCC function could only be accurately done by a few aficionados with the time and money to support scleral search-coil systems—an expensive, complicated and cumbersome, semi-invasive technique that never made the transition from the research lab to the dizzy clinic. Then, in 2009 and 2013, we introduced a video method of testing function of each of the six canals individually. Since 2009, the method has been taken up by most dizzy clinics around the world, with now close to 100 refereed articles in PubMed. In many dizzy clinics around the world, video Head Impulse Testing has supplanted caloric testing as the initial and in some cases the final test of choice in patients with suspected vestibular disorders. Here, we consider seven current, interesting, and controversial aspects of video Head Impulse Testing: (1) introduction to the test; (2) the progress from the head impulse protocol (HIMPs) to the new variant—suppression head impulse protocol (SHIMPs); (3) the physiological basis for head impulse testing; (4) practical aspects and potential pitfalls of video head impulse testing; (5) problems of vestibulo-ocular reflex gain calculations; (6) head impulse testing in central vestibular disorders; and (7) to stay right up-to-date—new clinical disease patterns emerging from video head impulse testing. With thanks and appreciation we dedicate this article to our friend, colleague, and mentor, Dr Bernard Cohen of Mount Sinai Medical School, New York, who since his

  2. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  3. Self-Recognition in Live Videos by Young Children: Does Video Training Help?

    ERIC Educational Resources Information Center

    Demir, Defne; Skouteris, Helen

    2010-01-01

    The overall aim of the experiment reported here was to establish whether self-recognition in live video can be facilitated when live video training is provided to children aged 2-2.5 years. While the majority of children failed the test of live self-recognition prior to video training, more than half exhibited live self-recognition post video…

  4. Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content.

    PubMed

    Brame, Cynthia J

    Educational videos have become an important part of higher education, providing an important content-delivery tool in many flipped, blended, and online classes. Effective use of video as an educational tool is enhanced when instructors consider three elements: how to manage cognitive load of the video; how to maximize student engagement with the video; and how to promote active learning from the video. This essay reviews literature relevant to each of these principles and suggests practical ways instructors can use these principles when using video as an educational tool. © 2016 C. J. Brame. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  5. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
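
    A simplified sketch of the kind of background modeling and per-block classification the abstract relies on (not the BMAP reference code): an exponential running-average background and a label of background, hybrid, or foreground for each block based on the fraction of changed pixels. Block size, learning rate, and thresholds are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Exponential running-average background model."""
    return (1 - alpha) * bg + alpha * frame.astype(float)

def classify_blocks(frame, bg, bs=16, pixel_thresh=12.0):
    """Label each bs x bs block: 0 = background, 1 = hybrid, 2 = foreground."""
    h, w = frame.shape
    diff = np.abs(frame.astype(float) - bg)
    labels = np.zeros((h // bs, w // bs), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            block = diff[by * bs:(by + 1) * bs, bx * bs:(bx + 1) * bs]
            moving = np.mean(block > pixel_thresh)   # fraction of changed pixels
            labels[by, bx] = 0 if moving < 0.1 else (2 if moving > 0.9 else 1)
    return labels
```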

  6. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  7. Joint Attention Development in Low-risk Very Low Birth Weight Infants at Around 18 Months of Age.

    PubMed

    Yamaoka, Noriko; Takada, Satoshi

    2016-10-18

    The purpose of this study was to clarify the developmental characteristics of joint attention in very low birth weight (VLBW) infants with a low risk of complications. Section B of the Checklist for Autism in Toddlers (CHAT) was administered to 31 VLBW and 45 normal birth weight (NBW) infants aged 18-22 months, while the sessions were recorded with a video camera. A semi-structured observation scale was developed to assess infants' joint attention from the video footage, and was shown to be reliable. VLBW, compared to NBW, infants showed significantly poorer skills in 2 of 4 items on responding to joint attention, and in 6 of 10 items on initiating joint attention. VLBW infants need more clues in order to produce joint attention. The difficulty was attributed to insufficient verbal and fine motor function skills. Continuous follow-up evaluation is essential for both high-risk and low-risk VLBW infants and their parents.

  8. Memory for images intense enough to draw an administration's attention: television and the "war on terror".

    PubMed

    Hutchinson, David; Bradley, Samuel D

    2009-03-01

    In the recent United States-led "war on terror," including ongoing engagements in Iraq and Afghanistan, news organizations have been accused of showing a negative view of developments on the ground. In particular, news depictions of casualties have brought accusations of anti-Americanism and aiding and abetting the terrorists' cause. In this study, video footage of war from television news stories was manipulated to investigate the effects of negative compelling images on cognitive resource allocation, physiological arousal, and recognition memory. Results of a within-subjects experiment indicate that negatively valenced depictions of casualties and destruction elicit greater attention and physiological arousal than positive and low-intensity images. Recognition memory for visual information in the graphic negative news condition was highest, whereas audio recognition for this condition was lowest. The results suggest that negative, high-intensity video imagery diverts cognitive resources away from the encoding of verbal information in the newscast, positioning visual images and not the spoken narrative as a primary channel of viewer learning.

  9. How musical are music video game players?

    PubMed

    Pasinski, Amanda C; Hannon, Erin E; Snyder, Joel S

    2016-10-01

    Numerous studies have shown that formal musical training is associated with sensory, motor, and cognitive advantages in individuals of various ages. However, the nature of the observed differences between musicians and nonmusicians is poorly understood, and little is known about the listening skills of individuals who engage in alternative types of everyday musical activities. Here, we show that people who have frequently played music video games outperform nonmusician controls on a battery of music perception tests. These findings reveal that enhanced musical aptitude can be found among individuals who play music video games, raising the possibility that music video games could potentially enhance music perception skills in individuals across a broad spectrum of society who are otherwise unable to invest the time and/or money required to learn a musical instrument.

  10. Video library for video imaging detection at intersection stop lines.

    DOT National Transportation Integrated Search

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  11. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video; that is, detection of corrupted pixels by means of a novel morphological filtering, followed by replacement of the detected pixels based on estimation from uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm removes impulse noise in color video well. The performance of the proposed algorithm is compared, in terms of image restoration metrics, with that of common successful algorithms.
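
    The paper's detector is morphological; as a hedged stand-in, the sketch below shows the general switching-filter idea it builds on: flag pixels that deviate strongly from a local median and replace only those, leaving uncorrupted pixels untouched. The window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_denoise(channel: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """Replace only suspected impulse pixels of one color channel."""
    med = median_filter(channel.astype(float), size=3)
    impulses = np.abs(channel.astype(float) - med) > threshold   # detection step
    out = channel.astype(float).copy()
    out[impulses] = med[impulses]                                # replacement step
    return out.astype(channel.dtype)
```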

  12. A complexity-scalable software-based MPEG-2 video encoder.

    PubMed

    Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin

    2004-05-01

    With the development of general-purpose processors (GPPs) and video signal processing algorithms, it has become possible to implement a software-based real-time video encoder on a GPP, and the low cost and easy upgradability of software attract developers to move video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is first set up to support complexity scalability; high-performance algorithms are then applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are exploited to improve data-access efficiency and processing parallelism. Other programming techniques, such as lookup tables, are adopted to reduce computational complexity. Simulation results show that these ideas not only improve the overall performance of video coding but also provide great flexibility in complexity regulation.
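
    One of the programming-level optimizations mentioned is replacing repeated per-pixel computation with a lookup table. The sketch below illustrates that general idea with a precomputed clamping table for an encoder's reconstruction step; the table range and residual bounds are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    # Precompute the clamp once: indices -256..511 map to values clipped to the
    # 0..255 pixel range.  Per-pixel min/max branches in the reconstruction loop
    # then become a single table lookup.
    _CLIP_OFFSET = 256
    _CLIP_TABLE = np.clip(np.arange(-256, 512), 0, 255).astype(np.uint8)

    def reconstruct_block(prediction, residual):
        """Add a decoded residual to a predicted block, clamping via the table.
        Assumes prediction values in 0..255 and residuals in -256..255."""
        idx = prediction.astype(np.int32) + residual.astype(np.int32) + _CLIP_OFFSET
        return _CLIP_TABLE[idx]
    ```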

  13. Delay Discounting of Video Game Players: Comparison of Time Duration Among Gamers.

    PubMed

    Buono, Frank D; Sprong, Matthew E; Lloyd, Daniel P; Cutter, Christopher J; Printz, Destiny M B; Sullivan, Ryan M; Moore, Brent A

    2017-02-01

    Video game addiction, or Internet gaming disorder as proposed by the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition), has criterion characteristics similar to other impulse control disorders. There is limited research examining video game addiction within a behavioral economic framework using delay discounting. The current study evaluated delay-discounting patterns for money and video game play by usual weekly hours of video game play. A total of 104 participants were assigned to 1 of 3 groups of video game players (low, medium, and high) and asked to complete monetary and video game discounting procedures through an online survey. Results showed significant differences between groups within both the monetary (p = 0.003) and video game (p = 0.004) discounting procedures. Additionally, a positive linear relationship was noted between the groups across both procedures. The results of the current study reinforce previous findings that more hours of video game use are associated with greater impulsivity and provide implications for future research.
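
    Delay-discounting data of this kind are commonly summarized with Mazur's hyperbolic model, V = A / (1 + kD), where V is the subjective value of an amount A delayed by D, and a larger k indicates steeper discounting (greater impulsivity). The sketch below fits k to a set of hypothetical indifference points; it illustrates the standard model, not the specific analysis reported in the article.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(delay, k, amount=100.0):
        """Mazur's hyperbolic discounting: subjective value of `amount` after `delay`."""
        return amount / (1.0 + k * delay)

    # Hypothetical indifference points: value of $100 at each delay (in days).
    delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
    values = np.array([95, 85, 70, 55, 40, 25], dtype=float)

    # Fit the discounting rate k; larger k = steeper (more impulsive) discounting.
    (k_hat,), _ = curve_fit(lambda d, k: hyperbolic(d, k), delays, values, p0=[0.01])
    print(f"estimated k = {k_hat:.4f}")
    ```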

  14. Obesity in the new media: a content analysis of obesity videos on YouTube.

    PubMed

    Yoo, Jina H; Kim, Junghyun

    2012-01-01

    This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed in YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although some video categories had begun to present other causal claims or solutions. Because of the prevailing emphasis on personal causes and solutions, however, numerous YouTube videos had a theme of weight-based teasing or showed obese persons engaging in stereotypical eating behaviors. We discuss the potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing the stigmatization of obese persons.

  15. The Children's Video Marketplace.

    ERIC Educational Resources Information Center

    Ducey, Richard V.

    This report examines a growing submarket, the children's video marketplace, which comprises broadcast, cable, and video programming for children 2 to 11 years old. A description of the tremendous growth in the availability and distribution of children's programming is presented, the economics of the children's video marketplace are briefly…

  16. Video Cartridges and Cassettes.

    ERIC Educational Resources Information Center

    Kletter, Richard C.; Hudson, Heather

    The economic and social significance of video cassettes (viewer-controlled playback system) is explored in this report. The potential effect of video cassettes on industrial training, education, libraries, and television is analyzed in conjunction with the anticipated hardware developments. The entire video cassette industry is reviewed firm by…

  17. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever an incident requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing outliers and in extracting a diverse set of representatives. The second uses a capped l2,1-norm to model sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding not only characterizes the structure but also reflects the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
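
    The capped l2,1-norm mentioned above sums the row-wise l2 norms of a coefficient matrix but caps each row's contribution at a threshold, so a few outlier rows cannot dominate the objective. The snippet below illustrates the norm itself, not the full joint embedding optimization; the variable names and threshold are assumptions.

    ```python
    import numpy as np

    def capped_l21(Z, theta):
        """Capped l2,1-norm: sum over rows of min(||z_i||_2, theta).

        Rows whose norm exceeds `theta` contribute only `theta`, limiting the
        influence of outlier data points during representative selection."""
        row_norms = np.linalg.norm(Z, axis=1)
        return np.minimum(row_norms, theta).sum()

    # Example: a selection-coefficient matrix with one outlier row.
    Z = np.array([[0.1, 0.0, 0.2],
                  [0.0, 0.3, 0.1],
                  [5.0, 4.0, 6.0]])     # outlier
    print(capped_l21(Z, theta=1.0))     # the outlier row contributes at most 1.0
    ```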

  18. Learning from Online Video Lectures

    ERIC Educational Resources Information Center

    Brecht, H. David

    2012-01-01

    This study empirically examines the instructional value of online video lectures--videos that a course's instructor prepares to supplement classroom or online-broadcast lectures. The study examines data from a classroom course, where the videos have a slower, more step-by-step lecture style than the classroom lectures; student use of videos is…

  19. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting), and scene change detection are presented to demonstrate the efficiency and performance of the proposed model.
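
    A rough way to picture the common/innovative decomposition is to take the common frame to be what is shared across a segment (for example, its temporal median) and each frame's innovative part to be its sparse residual. The sketch below conveys only that intuition; CIV itself jointly estimates both components with compressed-sensing machinery rather than this simple median-and-threshold split, and the threshold used here is an arbitrary assumption.

    ```python
    import numpy as np

    def decompose_segment(frames, sparsity_thresh=15.0):
        """Illustrative split of a video segment into a common frame plus
        sparse 'innovative' residuals (not the paper's joint CS estimation)."""
        stack = np.stack(frames).astype(np.float32)   # (T, H, W) grayscale frames
        common = np.median(stack, axis=0)             # content shared by the segment
        residuals = stack - common                    # per-frame deviations
        # Keep only significant deviations, yielding sparse innovations.
        innovations = np.where(np.abs(residuals) > sparsity_thresh, residuals, 0.0)
        return common, innovations
    ```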

  20. The energy expenditure of an activity-promoting video game compared to sedentary video games and TV watching.

    PubMed

    Mitre, Naim; Foster, Randal C; Lanningham-Foster, Lorraine; Levine, James A

    2011-01-01

    In the present study we investigated the effect of television watching and the use of activity-promoting video games on energy expenditure in obese and lean children. Energy expenditure and physical activity were measured while participants watched television, played a video game on a traditional sedentary video game console, and played the same video game on an activity-promoting video game console. Energy expenditure was significantly greater when children played the video game on the activity-promoting console than when they watched television or played the same game on the sedentary console. When movement was examined with accelerometry, children moved significantly more while playing the video game on the Nintendo Wii console. Activity-promoting video games have been shown to increase movement and can be an important tool for raising energy expenditure by 50% compared with sedentary activities of daily living.