Sample records for generation advanced video

  1. Next Generation Advanced Video Guidance Sensor Development and Test

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Lee, Jimmy; Robertson, Bryan

    2009-01-01

    The Advanced Video Guidance Sensor (AVGS) was the primary docking sensor for the Orbital Express mission. The sensor performed extremely well during the mission, and the technology has also been proven on orbit on other flights. Parts obsolescence issues prevented the construction of more AVGS units, so the next generation of sensor was designed with current parts and updated to support future programs. The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been tested as a breadboard, two different brassboard units, and a prototype. The testing revealed further improvements that could be made and demonstrated capabilities beyond any previously achieved by the sensor on orbit. This paper presents some of the sensor history, parts obsolescence issues, radiation concerns, and software improvements to the NGAVGS. In addition, some of the testing and test results are presented. The NGAVGS has shown that it will meet the general requirements for any space proximity operations or docking need.

  2. Next Generation Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Spencer, Susan; Bryan, Tom; Johnson, Jimmie; Robertson, Bryan

    2008-01-01

    The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). AVGS has a proven pedigree, based on extensive ground testing and flight demonstrations. The AVGS on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km. The first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next generation sensor must be updated to support the CEV and COTS programs. The flight proven AR&D sensor is being redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS (NGAVGS) at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation tolerant parts. In addition, new capabilities might include greater sensor range, auto ranging, and real-time video output. This paper presents an approach to sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It will also discuss approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, parts selection and test plans for the NGAVGS will be addressed to provide a highly reliable flight qualified sensor. Expanded capabilities through innovative use of existing capabilities will also be discussed.

  3. Next Generation Advanced Video Guidance Sensor: Low Risk Rendezvous and Docking Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Carrington, Connie; Spencer, Susan; Bryan, Thomas; Howard, Ricky T.; Johnson, Jimmie

    2008-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is being built and tested at MSFC. This paper provides an overview of current work on the NGAVGS, a summary of the video guidance heritage, and the AVGS performance on the Orbital Express mission. This paper also provides a discussion of applications to ISS cargo delivery vehicles, CEV, and future lunar applications.

  4. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology, some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  5. The Advanced Video Guidance Sensor: Orbital Express and the Next Generation

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Heaton, Andrew F.; Pinson, Robin M.; Carrington, Connie L.; Lee, James E.; Bryan, Thomas C.; Robertson, Bryan A.; Spencer, Susan H.; Johnson, Jimmie E.

    2008-01-01

    The Orbital Express (OE) mission performed the first autonomous rendezvous and docking in the history of the United States on May 5-6, 2007 with the Advanced Video Guidance Sensor (AVGS) acting as one of the primary docking sensors. Since that event, the OE spacecraft performed four more rendezvous and docking maneuvers, each time using the AVGS as one of the docking sensors. The Marshall Space Flight Center's (MSFC's) AVGS is a near-field proximity operations sensor that was integrated into the Autonomous Rendezvous and Capture Sensor System (ARCSS) on OE. The ARCSS provided the relative state knowledge to allow the OE spacecraft to rendezvous and dock. The AVGS is a mature sensor technology designed to support Automated Rendezvous and Docking (AR&D) operations. It is a video-based laser-illuminated sensor that can determine the relative position and attitude between itself and its target. Due to parts obsolescence, the AVGS that was flown on OE can no longer be manufactured. MSFC has been working on the next generation of AVGS for application to future Constellation missions. This paper provides an overview of the performance of the AVGS on Orbital Express and discusses the work on the Next Generation AVGS (NGAVGS).

  6. Advanced Video Guidance Sensor and next-generation autonomous docking sensors

    NASA Astrophysics Data System (ADS)

    Granade, Stephen R.

    2004-09-01

    In recent decades, NASA's interest in spacecraft rendezvous and proximity operations has grown. Additional instrumentation is needed to improve manned docking operations' safety, as well as to enable telerobotic operation of spacecraft or completely autonomous rendezvous and docking. To address this need, Advanced Optical Systems, Inc., Orbital Sciences Corporation, and Marshall Space Flight Center have developed the Advanced Video Guidance Sensor (AVGS) under the auspices of the Demonstration of Autonomous Rendezvous Technology (DART) program. Given a cooperative target comprising several retro-reflectors, AVGS provides six-degree-of-freedom information at ranges of up to 300 meters for the DART target. It does so by imaging the target, then performing pattern recognition on the resulting image. Longer range operation is possible through different target geometries. Now that AVGS is being readied for its test flight in 2004, the question is: what next? Modifications can be made to AVGS, including different pattern recognition algorithms and changes to the retro-reflector targets, to make it more robust and accurate. AVGS could be coupled with other space-qualified sensors, such as a laser range-and-bearing finder, that would operate at longer ranges. Different target configurations, including the use of active targets, could result in significant miniaturization over the current AVGS package. We will discuss these and other possibilities for a next-generation docking sensor or sensor suite that involve AVGS.

  7. Advanced Video Guidance Sensor and Next Generation Autonomous Docking Sensors

    NASA Technical Reports Server (NTRS)

    Granade, Stephen R.

    2004-01-01

    In recent decades, NASA's interest in spacecraft rendezvous and proximity operations has grown. Additional instrumentation is needed to improve manned docking operations' safety, as well as to enable telerobotic operation of spacecraft or completely autonomous rendezvous and docking. To address this need, Advanced Optical Systems, Inc., Orbital Sciences Corporation, and Marshall Space Flight Center have developed the Advanced Video Guidance Sensor (AVGS) under the auspices of the Demonstration of Autonomous Rendezvous Technology (DART) program. Given a cooperative target comprising several retro-reflectors, AVGS provides six-degree-of-freedom information at ranges of up to 300 meters for the DART target. It does so by imaging the target, then performing pattern recognition on the resulting image. Longer range operation is possible through different target geometries. Now that AVGS is being readied for its test flight in 2004, the question is: what next? Modifications can be made to AVGS, including different pattern recognition algorithms and changes to the retro-reflector targets, to make it more robust and accurate. AVGS could be coupled with other space-qualified sensors, such as a laser range-and-bearing finder, that would operate at longer ranges. Different target configurations, including the use of active targets, could result in significant miniaturization over the current AVGS package. We will discuss these and other possibilities for a next-generation docking sensor or sensor suite that involve AVGS.

  8. Recent advances in nondestructive evaluation made possible by novel uses of video systems

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.

    1990-01-01

    Complex materials are being developed for use in future advanced aerospace systems. High temperature materials have been targeted as a major area of materials development. The development of composites consisting of ceramic matrix and ceramic fibers or whiskers is currently being aggressively pursued internationally. These new advanced materials are difficult and costly to produce; however, their low density and high operating temperature range are needed for the next generation of advanced aerospace systems. These materials represent a challenge to the nondestructive evaluation community. Video imaging techniques not only enhance the nondestructive evaluation, but they are also required for proper evaluation of these advanced materials. Specific research examples are given, highlighting the impact that video systems have had on the nondestructive evaluation of ceramics. An image processing technique for computerized determination of grain and pore size distribution functions from microstructural images is discussed. The uses of video and computer systems for displaying, evaluating, and interpreting ultrasonic image data are presented.

  9. The Generative Effects of Instructional Organizers with Computer-Based Interactive Video.

    ERIC Educational Resources Information Center

    Kenny, Richard F.

    This study compared the use of three instructional organizers--the advance organizer (AO), the participatory pictorial graphic organizer (PGO), and the final form pictorial graphic organizer (FGO)--in the design and use of computer-based interactive video (CBIV) programs. That is, it attempted to determine whether a less generative or more…

  10. Advanced Video Data-Acquisition System For Flight Research

    NASA Technical Reports Server (NTRS)

    Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

    1996-01-01

    Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

  11. A novel sub-shot segmentation method for user-generated video

    NASA Astrophysics Data System (ADS)

    Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    With the proliferation of user-generated videos, temporal segmentation is becoming a challenging problem. Traditional video temporal segmentation methods such as shot detection do not work on unedited user-generated videos, since these often contain only a single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree-partitioning min-Hash technique, constructs sparse temporally constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms all related works. Furthermore, they indicate that the proposed approach can segment user-generated videos at a level comparable to the average human annotator.
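The min-Hash step in this pipeline can be sketched with a plain MinHash over per-frame feature sets; the `minhash_signature` helper, the toy visual-word features, and the 128-seed choice are illustrative assumptions, not the authors' implementation:

```python
import random

def minhash_signature(feature_set, hash_seeds):
    """One min-hash value per seed: the smallest hash of any feature."""
    return [min(hash((seed, f)) for f in feature_set) for seed in hash_seeds]

def estimate_similarity(sig_a, sig_b):
    """Fraction of matching min-hash values approximates Jaccard similarity."""
    matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
    return matches / len(sig_a)

random.seed(0)
seeds = [random.randrange(1 << 30) for _ in range(128)]

# Toy "frames" described by sets of quantized visual words (hypothetical).
frame1 = {"sky", "tree", "road", "car"}
frame2 = {"sky", "tree", "road", "bike"}   # similar frame
frame3 = {"desk", "lamp", "laptop"}        # unrelated frame

sig1 = minhash_signature(frame1, seeds)
sig2 = minhash_signature(frame2, seeds)
sig3 = minhash_signature(frame3, seeds)

# Similar frames share most min-hash values; dissimilar frames share few.
assert estimate_similarity(sig1, sig2) > estimate_similarity(sig1, sig3)
```

Frames whose signatures agree on many seeds would land in the same affinity sub-graph for the later clustering stage.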

  12. Automatic generation of pictorial transcripts of video programs

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
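The content-based sampling step can be illustrated with a minimal histogram-difference key-frame selector; `select_key_frames`, the 16-bin gray-level histogram, and the 0.4 threshold are hypothetical choices, not the system described in the record:

```python
import numpy as np

def select_key_frames(frames, threshold=0.4):
    """Content-based sampling: start a new segment (and keep its first
    frame as the key frame) whenever the gray-level histogram of the
    current frame differs enough from the previous key frame's."""
    def hist(f):
        h, _ = np.histogram(f, bins=16, range=(0, 256))
        return h / h.sum()
    key_indices = [0]
    ref = hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if 0.5 * np.abs(h - ref).sum() > threshold:  # total-variation distance
            key_indices.append(i)
            ref = h
    return key_indices

# Toy video: 5 dark frames, then 5 bright frames -> two scenes.
rng = np.random.default_rng(1)
dark = [rng.integers(0, 60, (32, 32)) for _ in range(5)]
bright = [rng.integers(180, 250, (32, 32)) for _ in range(5)]
print(select_key_frames(dark + bright))  # -> [0, 5]
```

Each returned index is the first frame of a detected scene; the caption text would then be aligned to these segment boundaries.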

  13. An Emerging Learning Design for Student-Generated "iVideos"

    ERIC Educational Resources Information Center

    Kearney, Matthew; Jones, Glynis; Roberts, Lynn

    2012-01-01

    This paper describes an emerging learning design for a popular genre of learner-generated video projects: "Ideas Videos" or "iVideos." These advocacy-style videos are short, two-minute, digital videos designed "to evoke powerful experiences about educative ideas" (Wong, Mishra, Koehler & Siebenthal, 2007, p1). We…

  14. Advanced Video Analysis Needs for Human Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Campbell, Paul D.

    1994-01-01

    Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.

  15. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  16. Realistic generation of natural phenomena based on video synthesis

    NASA Astrophysics Data System (ADS)

    Wang, Changbo; Quan, Hongyan; Li, Chenhui; Xiao, Zhao; Chen, Xiao; Li, Peng; Shen, Liuwei

    2009-10-01

    Research on the generation of natural phenomena has many applications in movie special effects, battlefield simulation, virtual reality, etc. Based on video synthesis techniques, a new approach is proposed for the synthesis of natural phenomena, including flowing water and fire flames. From the fire and flow video, seamless video of arbitrary length is generated. Then, the interaction between wind and fire flame is achieved through the flame's skeleton. Later, the flow is also synthesized by extending the video textures using an edge-resample method. Finally, the synthesized natural phenomena can be integrated into a virtual scene.

  17. Noise-Riding Video Signal Threshold Generation Scheme for a Plurality of Video Signal Channels

    DTIC Science & Technology

    2007-02-12

    on the selected signal channel to generate a new video signal threshold. The processing resource has an output to provide the new video signal threshold to the comparator circuit corresponding to the selected signal channel.

  18. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
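A minimal sketch of the triage idea described above, assuming simple frame-difference statistics in place of the paper's motion-detection and optical-flow machinery (the function name and thresholds are invented for illustration):

```python
import numpy as np

def flag_segment(frames, low=1.0, high=40.0):
    """Crude per-segment triage using mean absolute frame difference:
    below `low` -> 'static' (little or no motion), above `high` ->
    'fast_motion' (e.g. rapid camera motion), else 'ok'.
    Thresholds are illustrative, not from the paper."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    score = float(np.mean(diffs))
    if score < low:
        return "static"
    if score > high:
        return "fast_motion"
    return "ok"

rng = np.random.default_rng(0)
still = [np.full((24, 24), 100, dtype=np.uint8)] * 6
chaotic = [rng.integers(0, 256, (24, 24), dtype=np.uint8) for _ in range(6)]
print(flag_segment(still), flag_segment(chaotic))  # -> static fast_motion
```

Segments flagged `static` or `fast_motion` would be tagged in the meta-data rather than passed on for analysis.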

  19. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. In addition, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
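The extract-then-resynthesize framework can be sketched with a box-filter denoiser and power-matched white noise; note that the actual method also matches the spectral shape and cross-channel correlation of the grain, which this illustrative numpy sketch omits:

```python
import numpy as np

def extract_grain(frame, k=3):
    """Estimate grain as the residual between the frame and a smoothed
    (k x k box-filtered) version of itself -- a stand-in for the
    paper's denoising front end."""
    pad = k // 2
    padded = np.pad(frame.astype(float), pad, mode="edge")
    smooth = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    smooth /= k * k
    return frame - smooth

def synthesize_grain(shape, std, rng):
    """Re-synthesize grain as white Gaussian noise matched in power;
    a faithful model would also match power spectral density."""
    return rng.normal(0.0, std, shape)

rng = np.random.default_rng(2)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0, 5, clean.shape)   # simulated grainy frame
residual = extract_grain(noisy)                 # encoder-side extraction
fake = synthesize_grain(clean.shape, residual.std(), rng)  # decoder side
# The synthesized grain's power is close to the extracted residual's.
```

The encoder would transmit only the model parameters (here just a standard deviation) alongside the denoised video.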

  20. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations matched the speed and velocity features of each display. Researchers need to ensure that animation and video stimuli present similar motion characteristics, a feature critical to the future success of the video playback technique.
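The idea of validating an animation's motion against its source video can be sketched with a crude intensity-change proxy for optic-flow speed; a real analysis would use a proper optic-flow algorithm, and the function names and tolerance here are assumptions:

```python
import numpy as np

def mean_speed(frames):
    """Proxy for optic-flow speed: mean absolute intensity change
    per frame pair (a real analysis would use e.g. Lucas-Kanade)."""
    return float(np.mean([np.abs(b - a).mean()
                          for a, b in zip(frames, frames[1:])]))

def motion_match(video, animation, tol=0.25):
    """True if the animation's motion magnitude is within `tol`
    (relative) of the live video's."""
    v, a = mean_speed(video), mean_speed(animation)
    return abs(v - a) / max(v, 1e-9) <= tol

# Toy sequences: a bright bar drifting across the frame (step=1)
# versus a motionless bar (step=0).
def bar_video(step):
    frames = []
    for t in range(8):
        f = np.zeros((16, 16))
        f[:, (t * step) % 16] = 255.0
        frames.append(f)
    return frames

print(motion_match(bar_video(1), bar_video(1)))  # -> True
print(motion_match(bar_video(1), bar_video(0)))  # -> False
```

A mismatch like the second case would tell the researcher to re-tune the animation's display speed before using it as a stimulus.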

  1. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    NASA Astrophysics Data System (ADS)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  2. Automated Knowledge Generation with Persistent Surveillance Video

    DTIC Science & Technology

    2008-03-26

    2.1 Artificial Intelligence; 2.1.1 Formal Logic … background of Artificial Intelligence and the reasoning engines that will be applied to generate knowledge from data. Section 2.2 discusses background on … generation from persistent video. II. Background: In this chapter, we will discuss the background of Artificial Intelligence, Semantic Web, image …

  3. Practical system for generating digital mixed reality video holograms.

    PubMed

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is further verified through users' subjective evaluations.
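The Z-buffer mixing step can be illustrated with a per-pixel depth test in numpy; `z_merge` is a hypothetical helper, not the authors' GPU implementation:

```python
import numpy as np

def z_merge(real_rgb, real_z, virt_rgb, virt_z):
    """Per-pixel depth test: keep whichever layer is nearer the viewer
    (smaller z), which is how a Z buffer resolves occlusion between
    the real scene and the virtual object."""
    nearer = (virt_z < real_z)[..., None]   # broadcast over color channels
    return np.where(nearer, virt_rgb, real_rgb)

real_rgb = np.zeros((2, 2, 3), dtype=np.uint8)       # black real scene
virt_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)   # white virtual object
real_z = np.array([[1.0, 1.0], [1.0, 1.0]])
virt_z = np.array([[0.5, 2.0], [0.5, 2.0]])          # near left, far right
out = z_merge(real_rgb, real_z, virt_rgb, virt_z)
# The virtual object shows only where it is nearer than the real surface.
assert (out[:, 0] == 255).all() and (out[:, 1] == 0).all()
```

The merged RGB-plus-depth result would then feed the hologram computation stage.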

  4. Multiple Generations on Video Tape Recorders.

    ERIC Educational Resources Information Center

    Wiens, Jacob H.

    Helical scan video tape recorders were tested for their dubbing characteristics in order to make selection data available to media personnel. The equipment, two recorders of each type tested, was submitted by the manufacturers. The test was designed to produce quality evaluations for three generations of a single tape, thereby encompassing all…

  5. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
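The mode logic described above (power-up self-test, standby, acquisition, tracking) can be sketched as a small state machine; the command and mode names here are hypothetical, since the abstract does not give the actual AVGS command set:

```python
# Hypothetical command/mode names; the real AVGS command set is not
# described in the abstract.
class SensorModes:
    TRANSITIONS = {
        ("POWER_UP", "self_test_pass"): "STANDBY",
        ("STANDBY", "acquire"): "ACQUISITION",
        ("ACQUISITION", "target_locked"): "TRACKING",
        ("TRACKING", "target_lost"): "ACQUISITION",
        ("ACQUISITION", "standby"): "STANDBY",
        ("TRACKING", "standby"): "STANDBY",
    }

    def __init__(self):
        self.mode = "POWER_UP"

    def command(self, cmd):
        """Reject commands that are invalid in the current mode,
        mirroring the software's command-checking step."""
        nxt = self.TRANSITIONS.get((self.mode, cmd))
        if nxt is None:
            return False  # invalid command for this mode
        self.mode = nxt
        return True

s = SensorModes()
assert s.command("self_test_pass") and s.mode == "STANDBY"
assert not s.command("target_locked")       # can't lock from standby
assert s.command("acquire") and s.mode == "ACQUISITION"
assert s.command("target_locked") and s.mode == "TRACKING"
```

In the real sensor this checking runs in the first processor, which forwards only the image-processing parts of valid commands to the second processor.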

  6. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class, unequally protected video transmission system for wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate (CBR) transmission. In the splitting process the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  7. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high resolution video images up to 4 Mpixels @ 60 fps or high frame rate video images up to about 1000 fps @ 512×512 pixels.

  8. The Video Generation.

    ERIC Educational Resources Information Center

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  9. Randomized, Controlled Trial of an Advance Care Planning Video Decision Support Tool for Patients With Advanced Heart Failure.

    PubMed

    El-Jawahri, Areej; Paasche-Orlow, Michael K; Matlock, Dan; Stevenson, Lynne Warner; Lewis, Eldrin F; Stewart, Garrick; Semigran, Marc; Chang, Yuchiao; Parks, Kimberly; Walker-Corkery, Elizabeth S; Temel, Jennifer S; Bohossian, Hacho; Ooi, Henry; Mann, Eileen; Volandes, Angelo E

    2016-07-05

    Conversations about goals of care and cardiopulmonary resuscitation (CPR)/intubation for patients with advanced heart failure can be difficult. This study examined the impact of a video decision support tool and patient checklist on advance care planning for patients with heart failure. This was a multisite, randomized, controlled trial of a video-assisted intervention and advance care planning checklist versus a verbal description in 246 patients ≥64 years of age with heart failure and an estimated likelihood of death of >50% within 2 years. Intervention participants received a verbal description for goals of care (life-prolonging care, limited care, and comfort care) and CPR/intubation plus a 6-minute video depicting the 3 levels of care, CPR/intubation, and an advance care planning checklist. Control subjects received only the verbal description. The primary analysis compared the proportion of patients preferring comfort care between study arms immediately after the intervention. Secondary outcomes were CPR/intubation preferences and knowledge (6-item test; range, 0-6) after intervention. In the intervention group, 27 (22%) chose life-prolonging care, 31 (25%) chose limited care, 63 (51%) selected comfort care, and 2 (2%) were uncertain. In the control group, 50 (41%) chose life-prolonging care, 27 (22%) selected limited care, 37 (30%) chose comfort care, and 8 (7%) were uncertain (P<0.001). Intervention participants (compared with control subjects) were more likely to forgo CPR (68% versus 35%; P<0.001) and intubation (77% versus 48%; P<0.001) and had higher mean knowledge scores (4.1 versus 3.0; P<0.001). Patients with heart failure who viewed a video were more informed, more likely to select a focus on comfort, and less likely to desire CPR/intubation compared with patients receiving verbal information only. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01589120. © 2016 American Heart Association, Inc.

  10. 3rd-generation MW/LWIR sensor engine for advanced tactical systems

    NASA Astrophysics Data System (ADS)

    King, Donald F.; Graham, Jason S.; Kennedy, Adam M.; Mullins, Richard N.; McQuitty, Jeffrey C.; Radford, William A.; Kostrzewa, Thomas J.; Patten, Elizabeth A.; McEwan, Thomas F.; Vodicka, James G.; Wootan, John J.

    2008-04-01

    Raytheon has developed a 3rd-Generation FLIR Sensor Engine (3GFSE) for advanced U.S. Army systems. The sensor engine is based around a compact, productized detector-dewar assembly incorporating a 640 x 480 staring dual-band (MW/LWIR) focal plane array (FPA) and a dual-aperture coldshield mechanism. The capability to switch the coldshield aperture and operate at either of two widely-varying f/#s will enable future multi-mode tactical systems to more fully exploit the many operational advantages offered by dual-band FPAs. RVS has previously demonstrated high-performance dual-band MW/LWIR FPAs in 640 x 480 and 1280 x 720 formats with 20 μm pitch. The 3GFSE includes compact electronics that operate the dual-band FPA and variable-aperture mechanism, and perform 14-bit analog-to-digital conversion of the FPA output video. Digital signal processing electronics perform "fixed" two-point non-uniformity correction (NUC) of the video from both bands and optional dynamic scene-based NUC; advanced enhancement processing of the output video is also supported. The dewar-electronics assembly measures approximately 4.75 x 2.25 x 1.75 inches. A compact, high-performance linear cooler and cooler electronics module provide the necessary FPA cooling over a military environmental temperature range. 3GFSE units are currently being assembled and integrated at RVS, with the first units planned for delivery to the US Army.
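The "fixed" two-point NUC mentioned above derives a per-pixel gain and offset from two flat-field frames taken while viewing uniform sources at known temperatures. The following sketch illustrates the standard two-point technique only; the variable names and the scalar per-pixel model are assumptions, not Raytheon's implementation.

```python
# Sketch of two-point non-uniformity correction (NUC) for an IR focal plane
# array: calibrate per-pixel gain/offset from cold and hot flat fields, then
# apply corrected = gain * raw + offset. Illustrative values throughout.

def two_point_nuc(cold, hot, t_cold, t_hot):
    """Derive per-pixel (gain, offset) from two uniform-source frames."""
    gains, offsets = [], []
    for c, h in zip(cold, hot):
        gain = (t_hot - t_cold) / (h - c)   # maps raw counts to temperature scale
        offset = t_cold - gain * c
        gains.append(gain)
        offsets.append(offset)
    return gains, offsets

def correct(raw, gains, offsets):
    return [g * r + o for r, g, o in zip(raw, gains, offsets)]

# Two pixels with different responsivities and offsets:
cold = [100.0, 120.0]   # counts while viewing the cold source (20 C)
hot  = [300.0, 420.0]   # counts while viewing the hot source (40 C)
g, o = two_point_nuc(cold, hot, t_cold=20.0, t_hot=40.0)
flat = correct(cold, g, o)   # after correction every pixel reads t_cold
```

After calibration, both pixels report the same value when viewing either calibration source, which is exactly the non-uniformity the correction removes; the optional scene-based NUC mentioned above refines these coefficients in operation.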

  11. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are in focus in a research project at Linkoping University, Image Coding Group. The accuracy of the results of those forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of getting reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to simultaneously record all camera outputs digitally. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of

  12. Teaching Bioethics via the Production of Student-Generated Videos

    ERIC Educational Resources Information Center

    Willmott, Christopher J. R.

    2015-01-01

    There is growing recognition that science is not conducted in a vacuum and that advances in the biosciences have ethical and social implications for the wider community. An exercise is described in which undergraduate students work in teams to produce short videos about the science and ethical dimensions of current developments in biomedicine.…

  13. Augmenting advance care planning in poor prognosis cancer with a video decision aid: a preintervention-postintervention study.

    PubMed

    Volandes, Angelo E; Levin, Tomer T; Slovin, Susan; Carvajal, Richard D; O'Reilly, Eileen M; Keohan, Mary Louise; Theodoulou, Maria; Dickler, Maura; Gerecitano, John F; Morris, Michael; Epstein, Andrew S; Naka-Blackstone, Anastazia; Walker-Corkery, Elizabeth S; Chang, Yuchiao; Noy, Ariela

    2012-09-01

The authors tested whether an educational video on the goals of care in advanced cancer (life-prolonging care, basic care, or comfort care) helped patients understand these goals and had an impact on their preferences for resuscitation. A survey of 80 patients with advanced cancer was conducted before and after they viewed an educational video. The outcomes of interest included changes in goals of care preference and knowledge and consistency of preferences with code status. Before viewing the video, 10 patients (13%) preferred life-prolonging care, 24 patients (30%) preferred basic care, 29 patients (36%) preferred comfort care, and 17 patients (21%) were unsure. Preferences did not change after the video, when 9 patients (11%) chose life-prolonging care, 28 patients (35%) chose basic care, 29 patients (36%) chose comfort care, and 14 patients (18%) were unsure (P = .28). Compared with baseline, after the video presentation, more patients did not want cardiopulmonary resuscitation (CPR) (71% vs 62%; P = .03) or ventilation (80% vs 67%; P = .008). Knowledge about goals of care and likelihood of resuscitation increased after the video (P < .001). Of the patients who did not want CPR or ventilation after the video augmentation, only 4 patients (5%) had a documented do-not-resuscitate order in their medical record (kappa statistic, -0.01; 95% confidence interval, -0.06 to 0.04). Acceptability of the video was high. Patients with advanced cancer did not change care preferences after viewing the video, but fewer wanted CPR or ventilation. Documented code status was inconsistent with patient preferences. Patients were more knowledgeable after the video, reported that the video was acceptable, and said they would recommend it to others. The current results indicated that this type of video may enable patients to visualize "goals of care," enriching patient understanding of worsening health states and better informing decision making. Copyright © 2012 American Cancer

  14. X-Ray Calibration Facility/Advanced Video Guidance Sensor Test

    NASA Technical Reports Server (NTRS)

    Johnston, N. A. S.; Howard, R. T.; Watson, D. W.

    2004-01-01

    The advanced video guidance sensor was tested in the X-Ray Calibration facility at Marshall Space Flight Center to establish performance during vacuum. Two sensors were tested and a timeline for each are presented. The sensor and test facility are discussed briefly. A new test stand was also developed. A table establishing sensor bias and spot size growth for several ranges is detailed along with testing anomalies.

  15. Towards a next generation open-source video codec

    NASA Astrophysics Data System (ADS)

    Bankoski, Jim; Bultje, Ronald S.; Grange, Adrian; Gu, Qunshan; Han, Jingning; Koleszar, John; Mukherjee, Debargha; Wilkins, Paul; Xu, Yaowu

    2013-02-01

Google has recently been developing a next-generation open-source video codec called VP9, as part of the experimental branch of the libvpx repository included in the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, a number of enhancements and new tools have been added to improve the coding efficiency. This paper provides a technical overview of the current status of this project along with comparisons against other state-of-the-art video codecs, H.264/AVC and HEVC. The new tools that have been added so far include: larger prediction block sizes up to 64x64, various forms of compound INTER prediction, more modes for INTRA prediction, 1/8-pel motion vectors and 8-tap switchable sub-pel interpolation filters, improved motion reference generation and motion vector coding, improved entropy coding and frame-level entropy adaptation for various symbols, improved loop filtering, incorporation of Asymmetric Discrete Sine Transforms and larger 16x16 and 32x32 DCTs, frame-level segmentation to group similar areas together, etc. Other tools and various bitstream features are being actively worked on as well. The VP9 bitstream is expected to be finalized by early to mid-2013. Results show VP9 to be quite competitive in performance with mainstream state-of-the-art codecs.

  16. Structured student-generated videos for first-year students at a dental school in Malaysia.

    PubMed

    Omar, Hanan; Khan, Saad A; Toh, Chooi G

    2013-05-01

Student-generated videos provide an authentic learning experience for students, enhance motivation and engagement, improve communication skills, and improve collaborative learning skills. This article describes the development and implementation of a student-generated video activity as part of a knowledge, observation, simulation, and experience (KOSE) program at the School of Dentistry, International Medical University, Kuala Lumpur, Malaysia. It also reports the perceptions of first-year dental students (n=44) of an activity that introduced them to clinical scenarios involving patients and the dental team, with the aim of improving professional behavior and communication skills. The learning activity was divided into three phases: a preparatory phase, a video production phase, and a video-watching phase. Students were organized into five groups and were instructed to generate videos addressing given clinical scenarios. Following the activity, students' perceptions were assessed with a questionnaire. The results showed that 86 percent and 88 percent of the students, respectively, agreed that preparation for the activity enhanced their understanding of the role of dentists in the provision of health care and the role of enhanced teamwork. In addition, 86 percent and 75 percent, respectively, agreed that the activity improved their communication and project management skills. Overall, the dental students perceived the student-generated video activity as a positive experience that enabled them to play the major role in driving their learning process.

  17. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance, and surveillance missions. In this paper, we present a systematic approach to generating a UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences and removed mismatches using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
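The inlier/outlier split at the heart of the RANSAC step can be sketched as follows. To keep the sketch self-contained it fits a pure 2D translation rather than the full epipolar geometry, and the match data, threshold, and iteration count are made-up assumptions.

```python
import random

# Simplified RANSAC sketch in the spirit of the paper's matching pipeline:
# hypothesize a motion model from a minimal sample, count points that agree,
# and keep the hypothesis with the most inliers. Model here is a plain 2D
# translation for illustration only.

def ransac_translation(matches, iters=200, tol=1.0, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) putative correspondences.
    Returns (best_translation, inliers, outliers)."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        tx, ty = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) < tol
                   and abs(m[1][1] - m[0][1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    outliers = [m for m in matches if m not in best_inliers]
    return best_t, best_inliers, outliers

# Eight matches consistent with a (5, 3) shift plus two gross mismatches.
good = [((x, y), (x + 5, y + 3)) for x, y in [(0, 0), (1, 4), (2, 1), (3, 3),
                                              (4, 0), (5, 2), (6, 5), (7, 1)]]
bad = [((0, 0), (40, 10)), ((2, 2), (9, 30))]
t, inl, outl = ransac_translation(good + bad)
```

In the paper's setting, the surviving inliers would then feed the epipolar-geometry estimate of relative rotation and translation between frames.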

  18. Development of a video-based education and process change intervention to improve advance cardiopulmonary resuscitation decision-making.

    PubMed

    Waldron, Nicholas; Johnson, Claire E; Saul, Peter; Waldron, Heidi; Chong, Jeffrey C; Hill, Anne-Marie; Hayes, Barbara

    2016-10-06

    Advance cardiopulmonary resuscitation (CPR) decision-making and escalation of care discussions are variable in routine clinical practice. We aimed to explore physician barriers to advance CPR decision-making in an inpatient hospital setting and develop a pragmatic intervention to support clinicians to undertake and document routine advance care planning discussions. Two focus groups, which involved eight consultants and ten junior doctors, were conducted following a review of the current literature. A subsequent iterative consensus process developed two intervention elements: (i) an updated 'Goals of Patient Care' (GOPC) form and process; (ii) an education video and resources for teaching advance CPR decision-making and communication. A multidisciplinary group of health professionals and policy-makers with experience in systems development, education and research provided critical feedback. Three key themes emerged from the focus groups and the literature, which identified a structure for the intervention: (i) knowing what to say; (ii) knowing how to say it; (iii) wanting to say it. The themes informed the development of a video to provide education about advance CPR decision-making framework, improving communication and contextualising relevant clinical issues. Critical feedback assisted in refining the video and further guided development and evolution of a medical GOPC approach to discussing and recording medical treatment and advance care plans. Through an iterative process of consultation and review, video-based education and an expanded GOPC form and approach were developed to address physician and systemic barriers to advance CPR decision-making and documentation. Implementation and evaluation across hospital settings is required to examine utility and determine effect on quality of care.

  19. Orbital Express Advanced Video Guidance Sensor: Ground Testing, Flight Results and Comparisons

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Howard, Richard T.; Heaton, Andrew F.

    2008-01-01

    Orbital Express (OE) was a successful mission demonstrating automated rendezvous and docking. The 2007 mission consisted of two spacecraft, the Autonomous Space Transport Robotic Operations (ASTRO) and the Next Generation Serviceable Satellite (NEXTSat) that were designed to work together and test a variety of service operations in orbit. The Advanced Video Guidance Sensor, AVGS, was included as one of the primary proximity navigation sensors on board the ASTRO. The AVGS was one of four sensors that provided relative position and attitude between the two vehicles. Marshall Space Flight Center was responsible for the AVGS software and testing (especially the extensive ground testing), flight operations support, and analyzing the flight data. This paper briefly describes the historical mission, the data taken on-orbit, the ground testing that occurred, and finally comparisons between flight data and ground test data for two different flight regimes.

  20. Learner-Generated Digital Video: Using Ideas Videos in Teacher Education

    ERIC Educational Resources Information Center

    Kearney, Matthew

    2013-01-01

    This qualitative study investigates the efficacy of "Ideas Videos" (or "iVideos") in pre-service teacher education. It explores the experiences of student teachers and their lecturer engaging with this succinct, advocacy-style video genre designed to evoke emotions about powerful ideas in Education (Wong, Mishra, Koehler, &…

  1. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

The ability to detect and organize `hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, using audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
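The "exciting and rare" scoring idea can be sketched with a toy surprisal measure: segments whose features land in low-probability, high-valued regions of the feature distribution rank highest. The Gaussian background model, feature names, and values below are illustrative assumptions, not the paper's actual joint-density formulation.

```python
import math

# Toy sketch of an information-theoretic excitability score: sum per-feature
# surprisal (-log p under a Gaussian background model), counted only when the
# feature exceeds its mean, so only "exciting" deviations are rewarded.

def gaussian_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def excitability(segment, stats):
    score = 0.0
    for name, value in segment.items():
        mean, std = stats[name]
        if value > mean:                     # only upward (exciting) deviations
            score += -gaussian_logpdf(value, mean, std)
    return score

# Hypothetical per-feature background statistics (mean, std) and 3 segments.
stats = {"audio_energy": (0.3, 0.1), "pitch": (120.0, 20.0), "cut_density": (0.2, 0.1)}
segments = [
    {"audio_energy": 0.9, "pitch": 190.0, "cut_density": 0.6},  # big play
    {"audio_energy": 0.3, "pitch": 115.0, "cut_density": 0.2},  # idle play
    {"audio_energy": 0.5, "pitch": 140.0, "cut_density": 0.3},  # mild action
]
ranked = sorted(range(len(segments)),
                key=lambda i: excitability(segments[i], stats), reverse=True)
```

Rank-ordering the segments this way and concatenating the top-scoring ones is the compression-to-highlights step described in the abstract.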

  2. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
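The frame-level parallelization scheme can be sketched compactly: instead of parallelizing the CGH computation inside one frame, whole frames are farmed out to workers. In this sketch threads stand in for the PCs of the cluster, and the per-frame "hologram" computation is a toy placeholder, not real CGH math.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of frame-level parallel generation of a video hologram: each frame
# is independent, so frames are computed concurrently and reassembled in
# frame order afterwards. The arithmetic below is a stand-in for CGH.

def compute_frame_hologram(frame):
    """Placeholder per-frame CGH: accumulate a contribution from every
    3D object point (x, y, z) in the frame."""
    frame_id, points = frame
    return frame_id, sum(x * x + y * y + z for (x, y, z) in points)

def generate_video_hologram(frames, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(compute_frame_hologram, frames))
    return [h for _, h in sorted(results)]   # restore frame order

# 8 frames, each with 100 hypothetical object points.
video = [(i, [(x, x + 1, i) for x in range(100)]) for i in range(8)]
holograms = generate_video_hologram(video)
```

Because the frames are distributed rather than the per-frame pixels, each worker does one large, self-contained job, which is the communication-saving property the proposed system exploits.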

  3. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  4. Lights, Camera, Action: Advancing Learning, Research, and Program Evaluation through Video Production in Educational Leadership Preparation

    ERIC Educational Resources Information Center

    Friend, Jennifer; Militello, Matthew

    2015-01-01

    This article analyzes specific uses of digital video production in the field of educational leadership preparation, advancing a three-part framework that includes the use of video in (a) teaching and learning, (b) research methods, and (c) program evaluation and service to the profession. The first category within the framework examines videos…

  5. Subjective evaluation of next-generation video compression algorithms: a case study

    NASA Astrophysics Data System (ADS)

    De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio

    2010-08-01

    This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies have been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.

  6. Automatic Keyframe Summarization of User-Generated Video

    DTIC Science & Technology

    2014-06-01

Technology has been developed that classifies the genre of a video; video genres are types of videos that share similarities in content and structure. Many genres of video footage exist, including news, sports, movies, cartoons, and commercials. Rasheed et al. [42] classify video genres (comedy, action, drama, and horror) with low-level video statistics.

  7. Automated Generation of Geo-Referenced Mosaics From Video Data Collected by Deep-Submergence Vehicles: Preliminary Results

    NASA Astrophysics Data System (ADS)

Rzhanov, Y.; Beaulieu, S.; Soule, S. A.; Shank, T.; Fornari, D.; Mayer, L. A.

    2005-12-01

    Many advances in understanding geologic, tectonic, biologic, and sedimentologic processes in the deep ocean are facilitated by direct observation of the seafloor. However, making such observations is both difficult and expensive. Optical systems (e.g., video, still camera, or direct observation) will always be constrained by the severe attenuation of light in the deep ocean, limiting the field of view to distances that are typically less than 10 meters. Acoustic systems can 'see' much larger areas, but at the cost of spatial resolution. Ultimately, scientists want to study and observe deep-sea processes in the same way we do land-based phenomena so that the spatial distribution and juxtaposition of processes and features can be resolved. We have begun development of algorithms that will, in near real-time, generate mosaics from video collected by deep-submergence vehicles. Mosaics consist of >>10 video frames and can cover 100's of square-meters. This work builds on a publicly available still and video mosaicking software package developed by Rzhanov and Mayer. Here we present the results of initial tests of data collection methodologies (e.g., transects across the seafloor and panoramas across features of interest), algorithm application, and GIS integration conducted during a recent cruise to the Eastern Galapagos Spreading Center (0 deg N, 86 deg W). We have developed a GIS database for the region that will act as a means to access and display mosaics within a geospatially-referenced framework. We have constructed numerous mosaics using both video and still imagery and assessed the quality of the mosaics (including registration errors) under different lighting conditions and with different navigation procedures. We have begun to develop algorithms for efficient and timely mosaicking of collected video as well as integration with navigation data for georeferencing the mosaics. 
Initial results indicate that operators must be properly versed in the control of the

  8. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated against 4 web-retrieved videos and 4 operation-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) demonstrated the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Advanced Video Guidance Sensor (AVGS) Development Testing

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2004-01-01

NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing of various sorts at every phase of development. Some of the test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long-range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters, both in vacuum and ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.

  10. Miniaturized video-rate epi-third-harmonic-generation fiber-microscope.

    PubMed

    Chia, Shih-Hsuan; Yu, Che-Hang; Lin, Chih-Han; Cheng, Nai-Chia; Liu, Tzu-Ming; Chan, Ming-Che; Chen, I-Hsiu; Sun, Chi-Kuang

    2010-08-02

With a micro-electro-mechanical system (MEMS) mirror, we successfully developed a miniaturized epi-third-harmonic-generation (epi-THG) fiber-microscope with a video frame rate (31 Hz), designed for in vivo optical biopsy of human skin. With a large-mode-area (LMA) photonic crystal fiber (PCF) and a regular microscopic objective, the nonlinear distortion of the ultrafast pulse delivery could be much reduced while still achieving a 0.4 μm lateral resolution for epi-THG signals. In vivo real-time virtual biopsy of Asian skin at a video rate (31 Hz) with sub-micron resolution was obtained. The results indicate that this miniaturized system is compact enough for minimally invasive, hand-held clinical use.

  11. Gaze inspired subtitle position evaluation for MOOCs videos

    NASA Astrophysics Data System (ADS)

    Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo

    2017-06-01

    Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-position subtitles that accompany videos, in recent years researchers have tried to develop more advanced algorithms that generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Inspired by image-based eye-tracking techniques, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.

  12. Video Games as a Training Tool to Prepare the Next Generation of Cyber Warriors

    DTIC Science & Technology

    2014-10-01


  13. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  14. High efficiency video coding for ultrasound video communication in m-health systems.

    PubMed

    Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G

    2012-01-01

    Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that bitrate reductions of 50% are possible for equivalent clinical quality.
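
    Comparisons of this kind ultimately rest on rate-distortion measurements. As a minimal illustration (not the paper's own evaluation pipeline, which used clinical quality assessment), the peak signal-to-noise ratio between a reference frame and its decoded counterpart can be computed as follows:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test frame."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a CIF-sized frame and a slightly noisy "decoded" version.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)
decoded = np.clip(frame.astype(int) + rng.integers(-2, 3, frame.shape), 0, 255)
quality = psnr(frame, decoded)   # roughly 45 dB for this noise level
```

    In a bitrate comparison, two codecs would be run at matched bitrates and their per-frame PSNR (or a perceptual metric) averaged over each sequence.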

  15. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, high-resolution data are being generated at unprecedented rates, enabling enhanced data collection and the potential for broader participation. For example, high-resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that makes its entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep sea research in real time. We encourage students, educators and the general public to take advantage of freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these new online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  16. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc., a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  17. Randomized Controlled Trial of a Video Decision Support Tool for Cardiopulmonary Resuscitation Decision Making in Advanced Cancer

    PubMed Central

    Volandes, Angelo E.; Paasche-Orlow, Michael K.; Mitchell, Susan L.; El-Jawahri, Areej; Davis, Aretha Delight; Barry, Michael J.; Hartshorn, Kevan L.; Jackson, Vicki Ann; Gillick, Muriel R.; Walker-Corkery, Elizabeth S.; Chang, Yuchiao; López, Lenny; Kemeny, Margaret; Bulone, Linda; Mann, Eileen; Misra, Sumi; Peachey, Matt; Abbo, Elmer D.; Eichler, April F.; Epstein, Andrew S.; Noy, Ariela; Levin, Tomer T.; Temel, Jennifer S.

    2013-01-01

    Purpose Decision making regarding cardiopulmonary resuscitation (CPR) is challenging. This study examined the effect of a video decision support tool on CPR preferences among patients with advanced cancer. Patients and Methods We performed a randomized controlled trial of 150 patients with advanced cancer from four oncology centers. Participants in the control arm (n = 80) listened to a verbal narrative describing CPR and the likelihood of successful resuscitation. Participants in the intervention arm (n = 70) listened to the identical narrative and viewed a 3-minute video depicting a patient on a ventilator and CPR being performed on a simulated patient. The primary outcome was participants' preference for or against CPR measured immediately after exposure to either modality. Secondary outcomes were participants' knowledge of CPR (score range of 0 to 4, with higher score indicating more knowledge) and comfort with video. Results The mean age of participants was 62 years (standard deviation, 11 years); 49% were women, 44% were African American or Latino, and 47% had lung or colon cancer. After the verbal narrative, in the control arm, 38 participants (48%) wanted CPR, 41 (51%) wanted no CPR, and one (1%) was uncertain. In contrast, in the intervention arm, 14 participants (20%) wanted CPR, 55 (79%) wanted no CPR, and 1 (1%) was uncertain (unadjusted odds ratio, 3.5; 95% CI, 1.7 to 7.2; P < .001). Mean knowledge scores were higher in the intervention arm than in the control arm (3.3 ± 1.0 v 2.6 ± 1.3, respectively; P < .001), and 65 participants (93%) in the intervention arm were comfortable watching the video. Conclusion Participants with advanced cancer who viewed a video of CPR were less likely to opt for CPR than those who listened to a verbal narrative. PMID:23233708

  18. Randomized controlled trial of a video decision support tool for cardiopulmonary resuscitation decision making in advanced cancer.

    PubMed

    Volandes, Angelo E; Paasche-Orlow, Michael K; Mitchell, Susan L; El-Jawahri, Areej; Davis, Aretha Delight; Barry, Michael J; Hartshorn, Kevan L; Jackson, Vicki Ann; Gillick, Muriel R; Walker-Corkery, Elizabeth S; Chang, Yuchiao; López, Lenny; Kemeny, Margaret; Bulone, Linda; Mann, Eileen; Misra, Sumi; Peachey, Matt; Abbo, Elmer D; Eichler, April F; Epstein, Andrew S; Noy, Ariela; Levin, Tomer T; Temel, Jennifer S

    2013-01-20

    Decision making regarding cardiopulmonary resuscitation (CPR) is challenging. This study examined the effect of a video decision support tool on CPR preferences among patients with advanced cancer. We performed a randomized controlled trial of 150 patients with advanced cancer from four oncology centers. Participants in the control arm (n = 80) listened to a verbal narrative describing CPR and the likelihood of successful resuscitation. Participants in the intervention arm (n = 70) listened to the identical narrative and viewed a 3-minute video depicting a patient on a ventilator and CPR being performed on a simulated patient. The primary outcome was participants' preference for or against CPR measured immediately after exposure to either modality. Secondary outcomes were participants' knowledge of CPR (score range of 0 to 4, with higher score indicating more knowledge) and comfort with video. The mean age of participants was 62 years (standard deviation, 11 years); 49% were women, 44% were African American or Latino, and 47% had lung or colon cancer. After the verbal narrative, in the control arm, 38 participants (48%) wanted CPR, 41 (51%) wanted no CPR, and one (1%) was uncertain. In contrast, in the intervention arm, 14 participants (20%) wanted CPR, 55 (79%) wanted no CPR, and 1 (1%) was uncertain (unadjusted odds ratio, 3.5; 95% CI, 1.7 to 7.2; P < .001). Mean knowledge scores were higher in the intervention arm than in the control arm (3.3 ± 1.0 v 2.6 ± 1.3, respectively; P < .001), and 65 participants (93%) in the intervention arm were comfortable watching the video. Participants with advanced cancer who viewed a video of CPR were less likely to opt for CPR than those who listened to a verbal narrative.
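
    The unadjusted odds ratio reported above can be approximately reproduced from the stated counts. The sketch below uses a Woolf (log) confidence interval, which is an assumption on our part, since the abstract does not state its interval method; the resulting figures therefore differ slightly from the published 3.5 (1.7 to 7.2).

```python
import math

# Counts from the abstract: preference for CPR vs. no CPR in each arm.
control_cpr, control_no = 38, 41        # verbal narrative only
video_cpr, video_no = 14, 55            # narrative plus video

# Odds of declining CPR in the video arm relative to the control arm.
odds_ratio = (video_no / video_cpr) / (control_no / control_cpr)

# Woolf (log) 95% confidence interval: an assumed method, not stated
# in the abstract.
se = math.sqrt(1 / video_no + 1 / video_cpr + 1 / control_no + 1 / control_cpr)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
# odds_ratio is about 3.6, with an interval of roughly 1.7 to 7.6
```

    Small differences from the published values likely reflect rounding of the counts or a different interval method.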

  19. Video decision support tool for advance care planning in dementia: randomised controlled trial

    PubMed Central

    Paasche-Orlow, Michael K; Barry, Michael J; Gillick, Muriel R; Minaker, Kenneth L; Chang, Yuchiao; Cook, E Francis; Abbo, Elmer D; El-Jawahri, Areej; Mitchell, Susan L

    2009-01-01

    Objective To evaluate the effect of a video decision support tool on the preferences for future medical care in older people if they develop advanced dementia, and the stability of those preferences after six weeks. Design Randomised controlled trial conducted between 1 September 2007 and 30 May 2008. Setting Four primary care clinics (two geriatric and two adult medicine) affiliated with three academic medical centres in Boston. Participants Convenience sample of 200 older people (≥65 years) living in the community with previously scheduled appointments at one of the clinics. Mean age was 75 and 58% were women. Intervention Verbal narrative alone (n=106) or with a video decision support tool (n=94). Main outcome measures Preferred goal of care: life prolonging care (cardiopulmonary resuscitation, mechanical ventilation), limited care (admission to hospital, antibiotics, but not cardiopulmonary resuscitation), or comfort care (treatment only to relieve symptoms). Preferences after six weeks. The principal category for analysis was the difference in proportions of participants in each group who preferred comfort care. Results Among participants receiving the verbal narrative alone, 68 (64%) chose comfort care, 20 (19%) chose limited care, 15 (14%) chose life prolonging care, and three (3%) were uncertain. In the video group, 81 (86%) chose comfort care, eight (9%) chose limited care, four (4%) chose life prolonging care, and one (1%) was uncertain (χ2=13.0, df=3, P=0.003). Among all participants the factors associated with a greater likelihood of opting for comfort care were being a college graduate or higher, good or better health status, greater health literacy, white race, and randomisation to the video arm. In multivariable analysis, participants in the video group were more likely to prefer comfort care than those in the verbal group (adjusted odds ratio 3.9, 95% confidence interval 1.8 to 8.6). Participants were re-interviewed after six weeks. Among the 94

  20. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    NASA Astrophysics Data System (ADS)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, the important advances and widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged a fast development of voice and video services like video-calling. While multimedia services have largely grown on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to traditional networks, the 3GPP (The 3rd Generation Partnership Project) worked on a high-performance mobile standard called Long Term Evolution (LTE). In this paper, we aim to express recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, wideband codecs generally achieve good quality and better results, except for the Opus codec at 12.2 kbps.

  1. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  2. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961
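
    The foot-to-head direction cue that seeds the automatic calibration step can be illustrated with a minimal sketch; the detection records and helper names here are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical per-detection records: foot and head image coordinates
# (pixels) of a detected person in each camera.
detections = [
    {"cam": 0, "foot": (320, 400), "head": (322, 260)},
    {"cam": 1, "foot": (150, 300), "head": (148, 200)},
]

def foot_to_head(det):
    """Unit foot-to-head direction and apparent pixel height for one detection."""
    foot = np.asarray(det["foot"], dtype=float)
    head = np.asarray(det["head"], dtype=float)
    v = head - foot
    h = np.linalg.norm(v)
    return v / h, h

# One (direction, height) pair per camera; in the paper these feed the
# per-camera automatic scene calibration.
cues = {d["cam"]: foot_to_head(d) for d in detections}
```

    With known person height, such per-camera cues constrain the camera pose, which is what allows object metadata to be normalized across heterogeneous views.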

  3. Video Captions for Online Courses: Do YouTube's Auto-Generated Captions Meet Deaf Students' Needs?

    ERIC Educational Resources Information Center

    Parton, Becky Sue

    2016-01-01

    Providing captions for videos used in online courses is an area of interest for institutions of higher education. There are legal and ethical ramifications as well as time constraints to consider. Captioning tools are available, but some universities rely on the auto-generated YouTube captions. This study looked at a particular type of video--the…

  4. Teaching French Transformational Grammar by Means of Computer-Generated Video-Tapes.

    ERIC Educational Resources Information Center

    Adler, Alfred; Thomas, Jean Jacques

    This paper describes a pilot program in an integrated media presentation of foreign languages and the production and usage of seven computer-generated video tapes which demonstrate various aspects of French syntax. This instructional set could form the basis for CAI lessons in which the student is presented images identical to those on the video…

  5. Video Vectorization via Tetrahedral Remeshing.

    PubMed

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve a high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  6. A Randomized Controlled Trial of a Cardiopulmonary Resuscitation Video in Advance Care Planning for Progressive Pancreas and Hepatobiliary Cancer Patients

    PubMed Central

    Volandes, Angelo E.; Chen, Ling Y.; Gary, Kristen A.; Li, Yuelin; Agre, Patricia; Levin, Tomer T.; Reidy, Diane L.; Meng, Raymond D.; Segal, Neil H.; Yu, Kenneth H.; Abou-Alfa, Ghassan K.; Janjigian, Yelena Y.; Kelsen, David P.; O'Reilly, Eileen M.

    2013-01-01

    Abstract Background Cardiopulmonary resuscitation (CPR) is an important advance directive (AD) topic in patients with progressive cancer; however such discussions are challenging. Objective This study investigates whether video educational information about CPR engenders broader advance care planning (ACP) discourse. Methods Patients with progressive pancreas or hepatobiliary cancer were randomized to an educational CPR video or a similar CPR narrative. The primary end-point was the difference in ACP documentation one month posttest between arms. Secondary end-points included study impressions; pre- and post-intervention knowledge of and preferences for CPR and mechanical ventilation; and longitudinal patient outcomes. Results Fifty-six subjects were consented and analyzed. Rates of ACP documentation (either formal ADs or documented discussions) were 40% in the video arm (12/30) compared to 15% in the narrative arm (4/26), OR=3.6 [95% CI: 0.9–18.0], p=0.07. Post-intervention knowledge was higher in both arms. Posttest, preferences for CPR had changed in the video arm but not in the narrative arm. Preferences regarding mechanical ventilation did not change in either arm. The majority of subjects in both arms reported the information as helpful and comfortable to discuss, and they recommended it to others. More deaths occurred in the video arm compared to the narrative arm, and more subjects died in hospice settings in the video arm. Conclusions This pilot randomized trial addressing downstream ACP effects of video versus narrative decision tools demonstrated a trend towards more ACP documentation in video subjects. This trend, as well as other video effects, is the subject of ongoing study. PMID:23725233

  7. Aggressive driving video and non-contact enforcement (ADVANCE): drivers' reaction to violation notices : summary of survey results

    DOT National Transportation Integrated Search

    2001-01-01

    ADVANCE is an integration of state of the practice, off-the-shelf technologies which include video, speed measurement, distance measurement, and digital imaging that detects UDAs in the traffic stream and subsequently notifies violators by ma...

  8. Advanced Coal-Based Power Generations

    NASA Technical Reports Server (NTRS)

    Robson, F. L.

    1982-01-01

    Advanced power-generation systems using coal-derived fuels are evaluated in a two-volume report. The report considers fuel cells, combined gas- and steam-turbine cycles, and magnetohydrodynamic (MHD) energy conversion. It presents the technological status of each type of system and analyzes the performance of each operating on medium-Btu fuel gas, either delivered via pipeline to the powerplant or generated by a coal-gasification process at the plant site.

  9. Principal-Generated YouTube Video as a Method of Improving Parental Involvement

    ERIC Educational Resources Information Center

    Richards, Joey

    2013-01-01

    The purpose of this study was to evaluate the involvement level of parents and reveal whether principal-generated YouTube videos for regular communication would enhance levels of parental involvement at one North Texas Christian Middle School (pseudonym). The following questions guided this study: 1. What is the beginning level of parental…

  10. Qualitative and Quantitative Evaluation of Three Types of Student-Generated Videos as Instructional Support in Organic Chemistry Laboratories

    ERIC Educational Resources Information Center

    Box, Melinda C.; Dunnagan, Cathi L.; Hirsh, Lauren A. S.; Cherry, Clinton R.; Christianson, Kayla A.; Gibson, Radiance J.; Wolfe, Michael I.; Gallardo-Williams, Maria T.

    2017-01-01

    This study was designed to evaluate the effectiveness of student-generated videos as a supplement to teaching assistant (TA) instruction in an undergraduate organic chemistry laboratory. Three videos covering different aspects of lab instruction (experimental technique, use of instrumentation, and calculations) were produced using…

  11. The Role of Collaboration and Feedback in Advancing Student Learning in Media Literacy and Video Production

    ERIC Educational Resources Information Center

    Casinghino, Carl

    2015-01-01

    Teaching advanced video production is an art that requires great sensitivity to the process of providing feedback that helps students to learn and grow. Some students experience difficulty in developing narrative sequences or cause-and-effect strings of motion picture sequences. But when students learn to work collaboratively through the revision…

  12. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transport of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have shown that the proposed system is useful for internet telepresence.
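
    Step (3), view-dependent perspective image generation, amounts to mapping each output pixel's viewing ray into panorama coordinates. Below is a minimal nearest-neighbour sketch for an equirectangular panorama; the equirectangular projection is an assumption here, and the paper's actual omnidirectional camera geometry may differ:

```python
import numpy as np

def perspective_view(pano, yaw, pitch, fov_deg, out_w=320, out_h=240):
    """Sample a pinhole-perspective view from an equirectangular panorama.

    pano: H x W grayscale equirectangular image; yaw/pitch in radians.
    """
    H, W = pano.shape
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length (px)
    xx, yy = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    # Unit camera-frame ray for every output pixel.
    dirs = np.stack([xx, yy, np.full_like(xx, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])           # -pi .. pi
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))       # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]

# Toy panorama: a horizontal gradient encoding longitude.
pano = np.tile(np.linspace(0, 255, 720), (360, 1))
view = perspective_view(pano, yaw=0.0, pitch=0.0, fov_deg=60)
```

    Because each user's view is resampled independently from the shared stream, multiple users can look in different directions at the same time, as the abstract notes.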

  13. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms, and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playback, display, and processing of video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video-surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
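
    The motion-detection stage of such a platform can be reduced, at its simplest, to frame differencing; the toy sketch below illustrates the idea and is not the system's actual algorithm:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Binary motion mask by absolute frame differencing: the simplest
    form of a motion-detection stage."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Toy frames: a bright 10x10 block moves 5 pixels to the right.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:30, 20:30] = 200
curr[20:30, 25:35] = 200
mask = motion_mask(prev, curr)
changed = int(mask.sum())   # pixels flagged as moving
```

    A production system would add background modelling, noise suppression, and connected-component analysis before handing regions to the segmentation and tracking modules.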

  14. Creating a YouTube-Like Collaborative Environment in Mathematics: Integrating Animated Geogebra Constructions and Student-Generated Screencast Videos

    ERIC Educational Resources Information Center

    Lazarus, Jill; Roulet, Geoffrey

    2013-01-01

    This article discusses the integration of student-generated GeoGebra applets and Jing screencast videos to create a YouTube-like medium for sharing in mathematics. The value of combining dynamic mathematics software and screencast videos for facilitating communication and representations in a digital era is demonstrated herein. We share our…

  15. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem encountered in home videos. We propose a hierarchical key-frame summarization algorithm in which a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing, where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video in increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal-consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. In addition, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
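
    The color-based clustering idea at the finest level can be sketched with plain k-means over per-frame color histograms; note that the paper's pairwise K-means with a temporal-consecutiveness constraint is more elaborate than this unconstrained toy version:

```python
import numpy as np

def keyframes_by_color(hists, k, iters=20, seed=0):
    """Pick up to k key-frames by Lloyd's k-means on per-frame color
    histograms; each cluster contributes the frame nearest its centre."""
    rng = np.random.default_rng(seed)
    centers = hists[rng.choice(len(hists), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(hists[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = hists[labels == j].mean(axis=0)
    d = np.linalg.norm(hists[:, None] - centers[None], axis=2)
    return sorted(set(int(d[:, j].argmin()) for j in range(k)))

# Toy "video": 30 frames drawn from three distinct colour distributions
# (8-bin histograms centred on different values).
rng = np.random.default_rng(1)
hists = np.vstack([rng.normal(loc=m, scale=0.05, size=(10, 8))
                   for m in (0.1, 0.5, 0.9)])
keys = keyframes_by_color(hists, k=3)
```

    The hierarchy in the paper comes from re-clustering the selected key-frames recursively, so each coarser level is a summary of the level below it.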

  16. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making it extensible to tetrahedral viscous grid generation.
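
    The advancing-layers marching can be illustrated by extruding boundary nodes along their normals with geometrically growing spacing; a real implementation would also handle front collisions, cell-quality checks, and the transition to the advancing-front region:

```python
import numpy as np

def advancing_layers(boundary, normals, n_layers=5, h0=0.01, ratio=1.3):
    """March node layers off a boundary curve: each node advances along
    its unit normal with geometrically growing spacing, producing the
    highly stretched cells wanted in a boundary layer."""
    layers = [np.asarray(boundary, dtype=float)]
    offset, h = 0.0, h0
    for _ in range(n_layers):
        offset += h
        layers.append(layers[0] + offset * np.asarray(normals, dtype=float))
        h *= ratio
    return np.stack(layers)          # shape (n_layers + 1, n_nodes, 2)

# Flat wall along y = 0, with normals pointing in +y.
x = np.linspace(0.0, 1.0, 11)
boundary = np.column_stack([x, np.zeros_like(x)])
normals = np.tile([0.0, 1.0], (11, 1))
grid = advancing_layers(boundary, normals)
```

    Connecting consecutive layers node-by-node yields the stretched triangles (or prisms split into tetrahedra in 3D) near the wall, while the outermost layer becomes the starting front for conventional advancing-front generation of equilateral cells.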

  17. Advanced text and video analytics for proactive decision making

    NASA Astrophysics Data System (ADS)

    Bowman, Elizabeth K.; Turek, Matt; Tunison, Paul; Porter, Reed; Thomas, Steve; Gintautas, Vadas; Shargo, Peter; Lin, Jessica; Li, Qingzhe; Gao, Yifeng; Li, Xiaosheng; Mittu, Ranjeev; Rosé, Carolyn Penstein; Maki, Keith; Bogart, Chris; Choudhari, Samrihdi Shree

    2017-05-01

    Today's warfighters operate in a highly dynamic and uncertain world, and face many competing demands. Asymmetric warfare and the new focus on small, agile forces has altered the framework by which time critical information is digested and acted upon by decision makers. Finding and integrating decision-relevant information is increasingly difficult in data-dense environments. In this new information environment, agile data algorithms, machine learning software, and threat alert mechanisms must be developed to automatically create alerts and drive quick response. Yet these advanced technologies must be balanced with awareness of the underlying context to accurately interpret machine-processed indicators and warnings and recommendations. One promising approach to this challenge brings together information retrieval strategies from text, video, and imagery. In this paper, we describe a technology demonstration that represents two years of tri-service research seeking to meld text and video for enhanced content awareness. The demonstration used multisource data to find an intelligence solution to a problem using a common dataset. Three technology highlights from this effort include 1) Incorporation of external sources of context into imagery normalcy modeling and anomaly detection capabilities, 2) Automated discovery and monitoring of targeted users from social media text, regardless of language, and 3) The concurrent use of text and imagery to characterize behaviour using the concept of kinematic and text motifs to detect novel and anomalous patterns. Our demonstration provided a technology baseline for exploiting heterogeneous data sources to deliver timely and accurate synopses of data that contribute to a dynamic and comprehensive worldview.

  18. Investigating the quality of video consultations performed using fourth generation (4G) mobile telecommunications.

    PubMed

    Caffery, Liam J; Smith, Anthony C

    2015-09-01

The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with two aims: to determine whether 4G is a suitable telecommunications technology, and to identify whether variations in perceived audio and video quality were due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). The network performance metrics were obtained for each session, and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunications technology for delivering real-time video consultations. Variations in quality scores observed during this study were not explained by packet loss or jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service, owing to the known variability in performance of 4G networks. © The Author(s) 2015.
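As a small illustration of the analysis described in this abstract, a mean opinion score is simply the arithmetic mean of the 1-5 ITU ratings, and the quality-versus-network relationship can be tested with a Pearson correlation. The scores and jitter values below are hypothetical, not the study's data:

```python
from statistics import mean

def mos(scores):
    """Mean opinion score: arithmetic mean of 1-5 ITU ratings."""
    return mean(scores)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

# Hypothetical per-session audio ratings and jitter measurements
audio_scores = [4, 5, 4, 3, 5, 4]
jitter_ms = [12.0, 8.5, 10.2, 15.1, 7.9, 11.3]

print(round(mos(audio_scores), 2))  # 4.17
print(round(pearson(audio_scores, jitter_ms), 2))
```

A correlation near zero, as here, is consistent with the study's finding that quality variations were not explained by network metrics.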

  19. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

The amount of captured video is growing with the increased number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and by surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.
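The condensation idea in this abstract, shifting activities in time so they play simultaneously in a shorter timeline while each keeps a pointer to its original time, can be sketched with a toy greedy packer. The function name and the least-collision placement rule are our own simplification; the paper formulates placement as an energy minimization, not this greedy rule:

```python
def make_synopsis(activities, synopsis_len):
    """Greedily shift activity segments into a shorter synopsis timeline.

    activities: list of (label, start, end) in original-video seconds.
    Returns a list of (label, synopsis_start, original_start) tuples, so
    the synopsis doubles as an index back into the source video.
    """
    # Track, per synopsis time slot, how many activities already overlap
    # it, and place each activity at the least-loaded feasible offset.
    load = [0] * synopsis_len
    placed = []
    for label, start, end in sorted(activities, key=lambda a: a[1]):
        dur = min(end - start, synopsis_len)  # clip overly long activities
        best_t, best_cost = 0, float("inf")
        for t in range(0, synopsis_len - dur + 1):
            cost = sum(load[t:t + dur])
            if cost < best_cost:
                best_t, best_cost = t, cost
        for t in range(best_t, best_t + dur):
            load[t] += 1
        placed.append((label, best_t, start))
    return placed
```

Activities that occurred minutes apart in the source end up overlapping in the synopsis, and the stored original start time is what lets a viewer jump back to the full-resolution moment.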

  20. A novel key-frame extraction approach for both video summary and video index.

    PubMed

    Lei, Shaoshuai; Xie, Gang; Yan, Gaowei

    2014-01-01

Existing key-frame extraction methods are basically video-summary oriented, while the indexing task of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summary and video index. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure, and then appropriate key-frames are extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
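A rough sketch of the subshot-then-SVD pipeline described in this abstract follows. The fixed distance threshold here is a stand-in for the paper's dynamic distance-separability criterion, and the feature vectors and function name are our own assumptions:

```python
import numpy as np

def extract_key_frames(features, dist_thresh):
    """Subshot splitting followed by SVD-based key-frame selection.

    features: (n_frames, d) array with one feature vector (e.g. a colour
    histogram) per frame.  A new subshot starts wherever the distance
    between consecutive frames exceeds dist_thresh.  Within each subshot,
    SVD picks the frame best aligned with the dominant singular direction.
    Returns the chosen frame indices.
    """
    # 1. Split into subshots at large consecutive-frame distances.
    gaps = np.linalg.norm(np.diff(features, axis=0), axis=1)
    bounds = [0] + [i + 1 for i, g in enumerate(gaps) if g > dist_thresh]
    bounds.append(len(features))

    # 2. In each subshot, project frames onto the top right-singular
    #    vector and keep the frame with the largest projection.
    keys = []
    for s, e in zip(bounds, bounds[1:]):
        sub = features[s:e]
        _, _, vt = np.linalg.svd(sub, full_matrices=False)
        proj = np.abs(sub @ vt[0])
        keys.append(s + int(np.argmax(proj)))
    return keys
```

Because each key frame carries its index back into the shot, the same output supports both a summary (the frames themselves) and an index (their positions).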

  1. Three-dimensional hybrid grid generation using advancing front techniques

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Noack, Ralph W.

    1995-01-01

A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple-block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which significantly reduces the generation time. Secondly, grid points near the body (presumably with high aspect ratio) still maintain a structured (non-triangular or non-tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.

  2. An advance care plan decision support video before major surgery: a patient- and family-centred approach.

    PubMed

    Isenberg, Sarina R; Crossnohere, Norah L; Patel, Manali I; Conca-Cheng, Alison; Bridges, John F P; Swoboda, Sandy M; Smith, Thomas J; Pawlik, Timothy M; Weiss, Matthew; Volandes, Angelo E; Schuster, Anne; Miller, Judith A; Pastorini, Carolyn; Roter, Debra L; Aslakson, Rebecca A

    2018-06-01

Video-based advance care planning (ACP) tools have been studied in varied medical contexts; however, none have been developed for patients undergoing major surgery. Using a patient- and family-centred approach, our objective was to implement human-centred design (HCD) to develop an ACP decision support video for patients and their family members when preparing for major surgery. The study investigators partnered with surgical patients and their family members, surgeons and other health professionals to design an ACP decision support video using key HCD principles. Adapting Maguire's HCD stages from computer science to the surgical context, while also incorporating Elwyn et al's specifications for patient-oriented decision support tool development, we used a six-stage HCD process to develop the video: (1) plan HCD process; (2) specify where video will be used; (3) specify user and organisational requirements; (4) produce and test prototypes; (5) carry out user-based assessment; (6) field test with end users. Over 450 stakeholders were engaged in the development process, contributing to setting objectives, applying for funding, and providing feedback on the storyboard and iterations of the decision tool video. Throughout the HCD process, stakeholders' opinions were compiled and conflicting approaches negotiated, resulting in a tool that addressed stakeholders' concerns. Our patient- and family-centred approach using HCD facilitated discussion and the ability to elicit and balance sometimes competing viewpoints. The early engagement of users and stakeholders throughout the development process may help to ensure tools address the stated needs of these individuals. NCT02489799. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Informative-frame filtering in endoscopy videos

    NASA Astrophysics Data System (ADS)

    An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2005-04-01

Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processes such as computer-aided image processing or human experts' examinations, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify the video frames into two classes, informative and non-informative, using a combination of Discrete Fourier Transform (DFT), texture analysis, and K-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
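The reference-free, threshold-free idea in this abstract can be sketched as follows: score each frame by its share of high-frequency DFT energy (blur suppresses high frequencies), then let 2-means clustering separate the two classes so no fixed cutoff is needed. This is a simplified stand-in for the paper's pipeline; the texture-analysis features are omitted and all names are our own:

```python
import numpy as np

def classify_frames(frames, n_iter=20):
    """Label frames informative (in-focus) vs non-informative.

    frames: list of 2-D greyscale arrays.  Returns a boolean list,
    True = informative, decided without any reference image or preset
    threshold.
    """
    # Focus feature: share of spectral energy away from the DC region.
    feats = []
    for f in frames:
        spec = np.abs(np.fft.fftshift(np.fft.fft2(f)))
        h, w = spec.shape
        ch, cw = h // 4, w // 4
        low = spec[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()
        feats.append((spec.sum() - low) / spec.sum())
    feats = np.array(feats)

    # 2-means clustering on the scalar feature (no fixed threshold).
    centres = np.array([feats.min(), feats.max()])
    for _ in range(n_iter):
        assign = np.abs(feats[:, None] - centres[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centres[k] = feats[assign == k].mean()
    # The cluster with the higher centre has more high-frequency
    # energy, i.e. sharper (informative) frames.
    return list(assign == centres.argmax())
```

A flat or heavily blurred frame concentrates its energy near DC and lands in the non-informative cluster, matching the intended behaviour.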

  4. A Course-Embedded Comparison of Instructor-Generated Videos of Either an Instructor Alone or an Instructor and a Student

    ERIC Educational Resources Information Center

    Cooper, Katelyn M.; Ding, Lu; Stephens, Michelle D.; Chi, Michelene T. H.; Brownell, Sara E.

    2018-01-01

    Instructor-generated videos have become a popular way to engage students with material before a class, yet this is a relatively unexplored area of research. There is support for the use of videos in which instructors tutor students, but few studies have been conducted within the context of a classroom. In this study, conducted in a…

  5. Video Analytics for Indexing, Summarization and Searching of Video Archives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold E.; Trease, Lynn L.

This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.
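The keyword index tables described in this abstract can be illustrated with a minimal inverted index over timed transcript segments. This is a toy stand-in under our own assumptions (segment format, stopword list), not the paper's system:

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def build_index(segments):
    """Build a keyword index over timed transcript segments.

    segments: list of (start_seconds, text).  Returns a dict mapping
    each keyword to the sorted list of start times where it occurs,
    so a query can jump straight to the relevant point in the video.
    """
    index = defaultdict(set)
    for start, text in segments:
        for word in text.lower().split():
            word = word.strip(".,!?\"'()")
            if word and word not in STOPWORDS:
                index[word].add(start)
    return {w: sorted(ts) for w, ts in index.items()}
```

Searching the index for a keyword returns timestamps rather than whole videos, which is the "beyond just metadata tags" capability the abstract refers to.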

  6. Video Clips for Youtube: Collaborative Video Creation as an Educational Concept for Knowledge Acquisition and Attitude Change Related to Obesity Stigmatization

    ERIC Educational Resources Information Center

    Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.

    2014-01-01

    Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…

  7. Technology advancement of an oxygen generation subsystem

    NASA Technical Reports Server (NTRS)

    Lee, M. K.; Burke, K. A.; Schubert, F. H.; Wynveen, R. A.

    1979-01-01

An oxygen generation subsystem based on water electrolysis was developed and tested to further advance the concept and technology of the spacecraft air revitalization system. Emphasis was placed on demonstrating the subsystem integration concept and hardware maturity at the subsystem level. The integration concept of the air revitalization system was found to be feasible. The hardware and technology of the oxygen generation subsystem were demonstrated to be close to the preprototype level. Continued development of the oxygen generation technology is recommended to further reduce the total weight penalties of the oxygen generation subsystem through optimization.

  8. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit-rate savings compared with its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high-resolution video material.
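The recursive quadtree block structure mentioned in this abstract can be illustrated with a toy splitter. The variance criterion and the parameters are our own illustration only; HEVC's actual mode decision minimizes a rate-distortion cost, not pixel variance:

```python
def quadtree_split(block, x, y, size, min_size, var_thresh, out):
    """Recursively partition a square region of the 2-D list `block`
    into coding units: split into four quadrants while pixel variance
    exceeds var_thresh and the block is larger than min_size.
    Appends (x, y, size) leaf blocks to `out` and returns it.
    """
    px = [block[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    m = sum(px) / len(px)
    var = sum((p - m) ** 2 for p in px) / len(px)
    if size > min_size and var > var_thresh:
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                quadtree_split(block, x + dx, y + dy, h, min_size, var_thresh, out)
    else:
        out.append((x, y, size))
    return out
```

Flat regions stay as large blocks while detailed regions are subdivided, which is the intuition behind letting the quadtree reach larger coding-unit sizes for high-resolution material.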

  9. School-Context Videos in Janus-Faced Online Publicity: Learner-Generated Digital Video Production Going Online

    ERIC Educational Resources Information Center

    Palmgren-Neuvonen, Laura; Jaakkola, Maarit; Korkeamäki, Riitta-Liisa

    2015-01-01

    This article reports a case study on sChOOLtv, an online television for primary and secondary schools that aims to bridge the media gap between in-school and out-of-school learning environments. Contrary to its creators' expectations, the number of published videos has not increased since its establishment. Furthermore, the videos were mostly…

  10. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  11. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
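The pretrigger/post-trigger storage scheme described in this abstract can be sketched in software with a bounded ring buffer. The class, the mean-absolute-change trigger, and all parameters are our own stand-ins; the paper's trigger is a hardware state machine, not this sketch:

```python
from collections import deque

class EventRecorder:
    """Archive only frames around a detected event: a bounded deque
    holds the most recent `pre` frames; once the change between
    consecutive frames exceeds `thresh`, the pretrigger history plus
    the triggering frame and the next `post` frames are archived.
    Everything else is discarded, saving storage."""

    def __init__(self, pre=3, post=2, thresh=10):
        self.pre, self.post, self.thresh = pre, post, thresh
        self.buf = deque(maxlen=pre)   # rolling pretrigger buffer
        self.archive = []              # frames actually stored
        self.prev = None
        self.post_left = 0

    def feed(self, frame):
        # Trigger on a large mean absolute change between frames.
        if self.prev is not None and self.post_left == 0:
            change = sum(abs(a - b) for a, b in zip(frame, self.prev)) / len(frame)
            if change > self.thresh:
                self.archive.extend(self.buf)   # pretrigger history
                self.post_left = self.post + 1  # this frame + `post` more
        self.prev = frame
        if self.post_left > 0:
            self.archive.append(frame)
            self.post_left -= 1
        else:
            self.buf.append(frame)
```

Feeding hours of static frames costs only the small rolling buffer; the archive grows only when an event fires.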

  12. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    PubMed

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  13. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  14. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  15. Polarization-modulated second harmonic generation ellipsometric microscopy at video rate.

    PubMed

    DeWalt, Emma L; Sullivan, Shane Z; Schmitt, Paul D; Muir, Ryan D; Simpson, Garth J

    2014-08-19

    Fast 8 MHz polarization modulation coupled with analytical modeling, fast beam-scanning, and synchronous digitization (SD) have enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and polarized laser transmittance imaging with image acquisition rates up to video rate. In contrast to polarimetry, in which the polarization state of the exiting beam is recorded, NOSE enables recovery of the complex-valued Jones tensor of the sample that describes all polarization-dependent observables of the measurement. Every video-rate scan produces a set of 30 images (10 for each detector with three detectors operating in parallel), each of which corresponds to a different polarization-dependent result. Linear fitting of this image set contracts it down to a set of five parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the incident beam. These parameters can in turn be used to recover the Jones tensor elements of the sample. Following validation of the approach using z-cut quartz, NOSE microscopy was performed for microcrystals of both naproxen and glucose isomerase. When weighted by the measurement time, NOSE microscopy was found to provide a substantial (>7 decades) improvement in the signal-to-noise ratio relative to our previous measurements based on the rotation of optical elements and a 3-fold improvement relative to previous single-point NOSE approaches.

  16. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    ERIC Educational Resources Information Center

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-01-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character…

  17. Packetized video on MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1986-11-01

Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY that integrates video, voice and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.

  18. Advanced Video Activity Analytics (AVAA): Human Performance Model Report

    DTIC Science & Technology

    2017-12-01

Disclaimer: The findings in this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. This report concerns the Advanced Video Activity Analytics (AVAA) system. AVAA was designed to help US Army Intelligence Analysts exploit full-motion video more efficiently and…

  19. Fast generation of complex modulation video holograms using temporal redundancy compression and hybrid point-source/wave-field approaches

    NASA Astrophysics Data System (ADS)

    Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce

    2015-09-01

    The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
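The first step described in this abstract, removing temporally redundant data by comparing the current frame's intensity and depth against the previous frame, can be sketched as follows. This is a simplified pixel-wise version under our own assumptions (data layout, tolerance parameter, function name):

```python
def compress_frame(curr, prev, tol=0):
    """Return only the scene points of `curr` that changed since `prev`.

    curr, prev: dicts mapping (x, y) -> (intensity, depth).  Only the
    changed points need their light-wave contribution recomputed; the
    rest of the CGH pattern can be reused from the previous frame.
    """
    if prev is None:               # first frame: nothing to reuse
        return dict(curr)
    changed = {}
    for pt, (inten, depth) in curr.items():
        old = prev.get(pt)
        if old is None or abs(inten - old[0]) > tol or abs(depth - old[1]) > tol:
            changed[pt] = (inten, depth)
    return changed
```

In a mostly static scene the changed set is far smaller than the frame, which is where the interactive-rate speed-up comes from.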

  20. Packetized Video On MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1987-07-01

Theoretical analysis of an integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up during video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.

  1. Teaching Social Studies with Video Games

    ERIC Educational Resources Information Center

    Maguth, Brad M.; List, Jonathan S.; Wunderle, Matthew

    2015-01-01

Today's youth have grown up immersed in technology and are increasingly relying on video games to solve problems, engage socially, and find entertainment. Yet research and vignettes of teachers actually using video games to advance student learning in social studies are scarce (Hutchinson 2007). This article showcases how social studies…

  2. Video Golf

    NASA Technical Reports Server (NTRS)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  3. Utilising advance care planning videos to empower perioperative cancer patients and families: a study protocol of a randomised controlled trial.

    PubMed

    Aslakson, Rebecca A; Isenberg, Sarina R; Crossnohere, Norah L; Conca-Cheng, Alison M; Yang, Ting; Weiss, Matthew; Volandes, Angelo E; Bridges, John F P; Roter, Debra L

    2017-06-06

Despite positive health outcomes associated with advance care planning (ACP), little research has investigated the impact of ACP in surgical populations. Our goal is to evaluate how an ACP intervention video impacts the patient centredness and ACP of the patient-surgeon conversation during the presurgical consent visit. We hypothesise that patients who view the intervention will engage in more patient-centred communication with their surgeons compared with patients who view a control video. Randomised controlled superiority trial of an ACP video with two study arms (intervention ACP video and control video) and four visits (baseline, presurgical consent, postoperative 1 week and postoperative 1 month). Surgeons, patients, principal investigator and analysts are blinded to the randomisation assignment. Single, academic, inner-city and tertiary care hospital. Data collection began July 16, 2015 and continues to March 2017. Patients recruited from nine surgical oncology clinics who are undergoing major cancer surgery. In the intervention arm, patients view a patient preparedness video developed through extensive engagement with patients, surgeons and other stakeholders. Patients randomised to the control arm view an informational video about the hospital surgical programme. Primary outcome: patient centredness and ACP of patient-surgeon conversations during the presurgical consent visit, as measured through the Roter Interaction Analysis System. Secondary outcomes: patient Hospital Anxiety and Depression Scale score; patient goals of care; patient, companion and surgeon satisfaction; video helpfulness; medical decision maker designation; and the frequency with which patients watch the video. Intent-to-treat analysis will be used to assess the impact of video assignment on outcomes. Sensitivity analyses will assess whether there are differential effects contingent on patient or surgeon characteristics. This study has been approved by the Johns Hopkins School of Medicine institutional review

  4. Perceptual tools for quality-aware video networks

    NASA Astrophysics Data System (ADS)

    Bovik, A. C.

    2014-01-01

Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem, owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  5. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Abstract. Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics assess the perceptual quality of the video. For a medical video, however, assessing quality in terms of "diagnostic" value rather than "perceptual" quality is more important. We present a diagnostic-quality-oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are exploited by the proposed metric. The proposed metric, the cardiac ultrasound video quality index (CUQI), is a full-reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
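
    The exact CUQI formulation is not given in the abstract above; as a rough, hedged illustration of a full-reference index built from edge and motion information, the Python/NumPy sketch below (function names invented) correlates gradient-magnitude edge maps and frame-difference motion maps between a reference and a distorted clip:

```python
import numpy as np

def _edges(frame):
    # Gradient magnitude as a simple edge map (stand-in for a Sobel operator).
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def _similarity(a, b, eps=1e-8):
    # Normalised cross-correlation between two equally sized maps.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def toy_cuqi(reference, distorted):
    """Average edge- and motion-similarity over a greyscale video (T, H, W)."""
    edge_scores = [_similarity(_edges(r), _edges(d))
                   for r, d in zip(reference, distorted)]
    # Motion information approximated by successive frame differences.
    motion_scores = [_similarity(r1 - r0, d1 - d0)
                     for (r0, r1), (d0, d1) in zip(zip(reference, reference[1:]),
                                                   zip(distorted, distorted[1:]))]
    return 0.5 * (np.mean(edge_scores) + np.mean(motion_scores))
```

    A perfect copy scores close to 1, and the score falls as compression or transmission artefacts disturb the edge and motion structure that the actual metric exploits.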

  6. Advanced downhole periodic seismic generator

    DOEpatents

    Hardee, Harry C.; Hills, Richard G.; Striker, Richard P.

    1991-07-16

    An advanced downhole periodic seismic generator system for transmitting variable frequency, predominantly shear-wave vibration into earth strata surrounding a borehole. The system comprises a unitary housing operably connected to a well head by support and electrical cabling and contains clamping apparatus for selectively clamping the housing to the walls of the borehole. The system further comprises a variable speed pneumatic oscillator and a self-contained pneumatic reservoir for producing a frequency-swept seismic output over a discrete frequency range.

  7. Advanced instrumentation for next-generation aerospace propulsion control systems

    NASA Technical Reports Server (NTRS)

    Barkhoudarian, S.; Cross, G. S.; Lorenzo, Carl F.

    1993-01-01

    New control concepts for the next generation of advanced air-breathing and rocket engines and hypersonic combined-cycle propulsion systems are analyzed. The analysis provides a database on the instrumentation technologies for advanced control systems and cross matches the available technologies for each type of engine to the control needs and applications of the other two types of engines. Measurement technologies that are considered to be ready for implementation include optical surface temperature sensors, an isotope wear detector, a brushless torquemeter, a fiberoptic deflectometer, an optical absorption leak detector, a nonintrusive speed sensor, and an ultrasonic triducer. It is concluded that all 30 advanced instrumentation technologies considered can be recommended for further development to meet the needs of the next generation of jet-, rocket-, and hypersonic-engine control systems.

  8. Use of Video Decision Aids to Promote Advance Care Planning in Hilo, Hawai'i.

    PubMed

    Volandes, Angelo E; Paasche-Orlow, Michael K; Davis, Aretha Delight; Eubanks, Robert; El-Jawahri, Areej; Seitz, Rae

    2016-09-01

    Advance care planning (ACP) seeks to promote care delivery that is concordant with patients' informed wishes. Scalability and cost may be barriers to widespread ACP, and video decision aids may help address such barriers. Our primary hypothesis was that ACP documentation would increase in Hilo after ACP video implementation. Secondary hypotheses included increased use of hospice, fewer deaths in the hospital, and decreased costs in the last month of life. The city of Hilo in Hawai'i (population 43,263), which is served by one 276-bed hospital (Hilo Medical Center), one hospice (the Hospice of Hilo), and 30 primary care physicians. The intervention consisted of a single, 1- to 4-h training and access to a suite of ACP video decision aids. Prior to implementation, the rate of ACP documentation for hospitalized patients with late-stage disease was 3.2 % (11/346). After the intervention, ACP documentation was 39.9 % (1,107/2,773) (P < 0.001). Primary care providers in the intervention had an ACP completion rate for patients over 75 years of 37.0 % (1,437/3,888) compared to control providers, who had an average of 25.6 % (10,760/42,099) (P < 0.001). The rate of discharge from hospital to hospice for patients with late-stage disease was 5.7 % prior to the intervention and 13.8 % after the intervention (P < 0.001). The average total insurance cost for the last month of life among Hilo patients was $3,458 (95 % CI $3,051 to 3,865) lower per patient after the intervention when compared to the control region. Implementing ACP video decision aids was associated with improved ACP documentation, greater use of hospice, and decreased costs. Decision aids that promote ACP offer a scalable and cost-efficient medium to place patients at the center of their care.
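
    As a numerical sanity check on the headline figures above (ACP documentation of 11/346 before vs. 1,107/2,773 after the intervention), the reported P < 0.001 is consistent with a standard two-proportion z-test. The Python sketch below is an illustrative recomputation, not the study's actual analysis:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Reported ACP documentation rates: 1,107/2,773 after vs. 11/346 before.
z, p = two_proportion_z(1107, 2773, 11, 346)
```

    The statistic comes out around z = 13.4, far beyond any conventional significance threshold, in line with the reported P < 0.001.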

  9. Content fragile watermarking for H.264/AVC video authentication

    NASA Astrophysics Data System (ADS)

    Ait Sadi, K.; Guessoum, A.; Bouridane, A.; Khelifi, F.

    2017-04-01

    The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content fragile video watermarking method with independent authentication of each group of pictures (GOP) within the video. The discrete cosine transform is exploited to generate the authentication data, which are treated as a fragile watermark and embedded in the motion vectors. This technique uses robust visual features extracted from the video, pertaining to the set of selected macroblocks (MBs) that hold the best partition mode in a tree-structured motion compensation process. An additional degree of security is offered by using the keyed hash function HMAC-SHA-256 and by randomly choosing candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves accurate authentication with high fragility and fidelity while maintaining the original bitrate and perceptual quality. Furthermore, its ability to detect tampered frames under spatial, temporal and colour manipulations is confirmed.
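
    The abstract names HMAC-SHA-256 as the keyed function applied to the extracted features. A minimal Python sketch of that authentication-data step follows (feature extraction and motion-vector embedding are omitted; the function names are illustrative, not from the paper):

```python
import hmac
import hashlib

def authentication_watermark(feature_bytes, key):
    """Keyed digest of per-GOP visual features, to be embedded as a fragile watermark."""
    return hmac.new(key, feature_bytes, hashlib.sha256).digest()

def verify(feature_bytes, key, embedded_digest):
    # Recompute the digest from the received video's features and compare it
    # against the extracted watermark; compare_digest is constant-time.
    return hmac.compare_digest(
        authentication_watermark(feature_bytes, key), embedded_digest)
```

    Because the digest is keyed, an attacker who alters the content cannot forge a matching watermark without the secret key, which is what makes the scheme fragile to tampering yet verifiable.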

  10. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data to be calculated for the video holograms is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were reduced to 86.95% and 34.99% of those of the conventional N-LUT method, and to 86.53% and 32.30% of those of the temporal redundancy-based N-LUT (TR-N-LUT) method, respectively.
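
    The abstract does not detail how the motion vectors are extracted; a generic exhaustive-search block-matching sketch in NumPy (block size and search range are illustrative) conveys the idea of finding, for each block of the current frame, its best-matching displacement in the previous frame:

```python
import numpy as np

def block_motion_vectors(prev, curr, block=8, search=4):
    """Exhaustive-search block matching: one (dy, dx) vector per block of `curr`."""
    H, W = curr.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        cand = prev[y:y + block, x:x + block].astype(int)
                        sad = np.abs(target - cand).sum()  # sum of absolute differences
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```

    In the MC-N-LUT setting, such vectors would be used to shift previously computed N-LUT contributions rather than recalculating every object point from scratch.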

  11. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras and DSLRs, as well as smartphones and, increasingly, so-called action cameras mounted on sports devices. The implementation of videos by generating QR codes and relevant pictures out of the video stream via a software implementation was the subject of last year's paper. This year we present first data about what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted out of the video in order to represent it, the positions in the book, and different design strategies compared to regular books.

  12. This Rock 'n' Roll Video Teaches Math

    ERIC Educational Resources Information Center

    Niess, Margaret L.; Walker, Janet M.

    2009-01-01

    Mathematics is a discipline that has significantly advanced through the use of digital technologies with improved computational, graphical, and symbolic capabilities. Digital videos can be used to present challenging mathematical questions for students. Video clips offer instructional possibilities for moving students from a passive mode of…

  13. Intelligent keyframe extraction for video printing

    NASA Astrophysics Data System (ADS)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
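
    A toy version of the candidate-selection stage, assuming only the accumulative colour-histogram-difference feature (the full system also uses colour layout, camera motion estimation, object tracking, face detection and audio events):

```python
import numpy as np

def candidate_keyframes(frames, bins=16, top_k=3):
    """Rank frames by colour-histogram change relative to the previous frame."""
    hists = [np.histogram(f, bins=bins, range=(0, 256))[0] / f.size for f in frames]
    # Score each frame (after the first) by the L1 distance between consecutive
    # normalised histograms; large jumps suggest content changes worth keeping.
    diffs = [np.abs(hists[i] - hists[i - 1]).sum() for i in range(1, len(hists))]
    ranked = sorted(range(1, len(frames)), key=lambda i: diffs[i - 1], reverse=True)
    # The first frame is always a candidate; add the top-scoring transitions.
    return sorted({0, *ranked[:top_k - 1]})
```

    In the full system these candidates would then be clustered and re-evaluated for content and image quality before the final keyframe set is produced.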

  14. Overcoming Challenges: "Going Mobile with Your Own Video Models"

    ERIC Educational Resources Information Center

    Carnahan, Christina R.; Basham, James D.; Christman, Jennifer; Hollingshead, Aleksandra

    2012-01-01

    Video modeling has been shown to be an effective intervention for students with a variety of disabilities. Traditional video models present problems in terms of application across meaningful settings, such as in the community or even across the school environment. However, with advances in mobile technology, portable devices with video capability…

  15. Next Generation NASA GA Advanced Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew S.

    2006-01-01

    Not only is the common dream of frequent personal flight travel going unfulfilled, the current generation of General Aviation (GA) is facing tremendous challenges that threaten to relegate the Single Engine Piston (SEP) aircraft market to a footnote in the history of U.S. aviation. A case is made that this crisis stems from a generally low utility coupled to a high cost that makes the SEP aircraft of relatively low transportation value and beyond the means of many. The roots of this low value are examined in a broad sense, and a Next Generation NASA Advanced GA Concept is presented that attacks those elements addressable by synergistic aircraft design.

  16. A prototype to automate the video subsystem routing for the video distribution subsystem of Space Station Freedom

    NASA Astrophysics Data System (ADS)

    Betz, Jessie M. Bethly

    1993-12-01

    The Video Distribution Subsystem (VDS) for Space Station Freedom provides onboard video communications. The VDS includes three major functions: external video switching; internal video switching; and sync and control generation. The Video Subsystem Routing (VSR) is a part of the VDS Manager Computer Software Configuration Item (VSM/CSCI). The VSM/CSCI is the software which controls and monitors the VDS equipment. VSR activates, terminates, and modifies video services in response to Tier-1 commands to connect video sources to video destinations. VSR selects connection paths based on availability of resources and updates the video routing lookup tables. This project involves investigating the current methodology to automate the Video Subsystem Routing and developing and testing a prototype as 'proof of concept' for designers.

  17. Synchronous-digitization for Video Rate Polarization Modulated Beam Scanning Second Harmonic Generation Microscopy.

    PubMed

    Sullivan, Shane Z; DeWalt, Emma L; Schmitt, Paul D; Muir, Ryan M; Simpson, Garth J

    2015-03-09

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to a polarization-dependent result. Processing of this image set by linear fitting contracts each set of 10 images down to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer with a simple sine wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed on micro-crystals of naproxen.

  18. Synchronous-digitization for video rate polarization modulated beam scanning second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Sullivan, Shane Z.; DeWalt, Emma L.; Schmitt, Paul D.; Muir, Ryan D.; Simpson, Garth J.

    2015-03-01

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to a polarization-dependent result. Processing of this image set by linear fitting contracts each set of 10 images down to a set of 5 parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer with a simple sine wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed on micro-crystals of naproxen.
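
    The linear fitting step described above, which contracts each set of 10 polarization-modulated images to 5 parameters per detector, amounts to least-squares fitting against a small harmonic basis. The NumPy sketch below assumes a plain Fourier design matrix (an assumption; the paper's exact basis is not given in the abstract):

```python
import numpy as np

def fit_modulation(intensities):
    """Fit 10 polarization-modulated intensities to 5 Fourier coefficients."""
    theta = 2 * np.pi * np.arange(10) / 10          # 10 repeating modulation states
    # Design matrix: DC term plus first and second harmonics (cos/sin) -> 5 parameters.
    A = np.column_stack([np.ones(10),
                         np.cos(theta), np.sin(theta),
                         np.cos(2 * theta), np.sin(2 * theta)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs
```

    On 10 evenly spaced modulation states the basis columns are orthogonal, so noiseless synthetic data are recovered exactly; per-pixel application of this fit is what makes video-rate parameter recovery feasible.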

  19. Real-time video analysis for retail stores

    NASA Astrophysics Data System (ADS)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment, which play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on a standard dataset, yielding promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we define a computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimates. The system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
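
    Once tracks and roles are available, the two analytics named above (region-specific people count and dwell time) reduce to bookkeeping over tracked positions. A Python sketch with invented data structures, not the paper's implementation:

```python
from collections import defaultdict

def region_analytics(tracks, regions, fps=25):
    """Per-region people count and dwell time from tracked (frame, x, y) positions.

    tracks:  {person_id: [(frame, x, y), ...]} -- one entry per detection
    regions: {region_name: (x0, y0, x1, y1)}  -- axis-aligned store regions
    """
    counts = defaultdict(set)      # distinct people seen per region
    dwell_frames = defaultdict(int)  # total frames spent in each region
    for pid, points in tracks.items():
        for frame, x, y in points:
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    counts[name].add(pid)
                    dwell_frames[name] += 1
    dwell_seconds = {name: n / fps for name, n in dwell_frames.items()}
    return {name: len(pids) for name, pids in counts.items()}, dwell_seconds
```

    Filtering the tracks by the role classifier's customer/staff label before this step yields customer-only traffic estimates.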

  20. Fulldome Video: An Emerging Technology for Education

    ERIC Educational Resources Information Center

    Law, Linda E.

    2006-01-01

    This article talks about fulldome video, a new technology which has been adopted fairly extensively by the larger, well-funded planetariums. Fulldome video, also called immersive projection, can help teach subjects ranging from geology to history to chemistry. The rapidly advancing progress of projection technology has provided high-resolution…

  1. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers for all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
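
    At the core of structure-from-motion is triangulating 3-D points from matched 2-D observations in calibrated views. A minimal two-view linear (DLT) triangulation sketch in NumPy, not the specific pipeline used in the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel observations.
    """
    # Each observation contributes two linear constraints on the homogeneous point.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean
```

    With noiseless observations the smallest singular vector of A recovers the point exactly; a full reconstruction from video repeats this over thousands of feature matches while also estimating the camera poses.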

  2. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance the side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared with previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of decoded video are improved by 0.2 to 3.5 dB compared with H.264/AVC intracoding.

  3. Evaluation of automatic video summarization systems

    NASA Astrophysics Data System (ADS)

    Taskiran, Cuneyt M.

    2006-01-01

    Compact representations of video data, or video summaries, greatly enhance efficient video browsing. However, rigorous evaluation of video summaries generated by automatic summarization systems is a complicated process. In this paper we examine the summary evaluation problem. Text summarization is the oldest and most successful summarization domain; we draw parallels between the two domains and introduce methods and terminology. Finally, we present results from a comprehensive summary evaluation that we have performed.

  4. Development of advanced generator of singlet oxygen for a COIL

    NASA Astrophysics Data System (ADS)

    Kodymová, Jarmila; Špalek, Otomar; Jirásek, Vít; Čenský, Miroslav; Hrubý, Jan

    2006-05-01

    The generator of singlet oxygen (SOG) remains a challenge for the chemical oxygen-iodine laser (COIL). Hitherto, only chemical generators based on the gas-liquid reaction system (chlorine-basic hydrogen peroxide) can supply singlet oxygen, O2(1Δ), in sufficiently high yields and at pressures high enough to maintain operation of high-power supersonic COIL facilities. Conventional jet-type or rotating-disc generators often cause problems, resulting mainly from liquid droplets entrained by the O2(1Δ) stream into the laser cavity, and from the limited scalability of these generators. Advanced generator concepts currently under investigation are based on two different approaches: (i) O2(1Δ) generation by electrical discharge in various configurations, thus eliminating liquid chemistry, and (ii) O2(1Δ) generation by conventional chemistry in novel configurations offering increased SOG efficiency and eliminating the drawbacks of existing devices. One advanced chemical generator concept, a spray SOG with centrifugal separation of the gas-liquid phases, has been proposed and investigated in our laboratory. In this paper we present a description of the generator principle, essential results of theoretical estimations, and interim experimental results obtained with the spray SOG.

  5. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 

  6. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
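
    The context-clustering step, collecting consecutively similar shots into scene-level groups, can be sketched as a single pass that starts a new cluster whenever similarity to the preceding shot drops below a threshold (cosine similarity and the threshold value are assumptions, since the paper derives its clustering from MPEG-7 context information):

```python
def cluster_consecutive_shots(shot_features, threshold=0.8):
    """Group consecutive shots whose similarity to the previous shot exceeds a threshold."""
    def similarity(a, b):
        # Cosine similarity between two shot description vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    clusters = [[0]]
    for i in range(1, len(shot_features)):
        # A similarity drop marks a scene boundary; otherwise extend the cluster.
        if similarity(shot_features[i], shot_features[i - 1]) >= threshold:
            clusters[-1].append(i)
        else:
            clusters.append([i])
    return clusters
```

    Each resulting cluster plays the role of a scene in the hierarchical summary, from which the best-scoring shots are then selected.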

  7. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  8. Social Properties of Mobile Video

    NASA Astrophysics Data System (ADS)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people, beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations with regard to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for the adoption and design of mobile video technologies and services are discussed as well.

  9. Magnetic Braking: A Video Analysis

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Abella-Palacios, A. J.

    2012-10-01

    This paper presents a laboratory exercise that introduces students to video analysis software through a demonstration of Lenz's law. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics [1,2]. By using digital video's frame-advance features and "marking" the position of a moving object in each frame, students are able to more precisely determine the position of an object at much smaller time increments than would be possible with common timing devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
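    As an illustrative sketch (not from the paper), the velocity-and-acceleration step described above can be coded with central differences over the marked positions; the 30 fps frame times and free-fall positions below are hypothetical example data.

```python
# Illustrative sketch: estimating velocity and acceleration from
# frame-by-frame position data, as a student would after "marking" an
# object in video-analysis software. The data are hypothetical.

def central_differences(t, x):
    """Derivative estimates at interior samples via central differences."""
    return [(x[i + 1] - x[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, len(x) - 1)]

# A ball in free fall sampled at 30 fps, position y in metres.
dt = 1.0 / 30.0
t = [i * dt for i in range(6)]
y = [0.5 * 9.8 * ti ** 2 for ti in t]   # y = (1/2) g t^2

v = central_differences(t, y)           # velocity at t[1..4]
a = central_differences(t[1:-1], v)     # acceleration at interior points

# For a quadratic position curve, central differences recover g exactly
# (up to floating-point rounding), so each entry of `a` is ~9.8 m/s^2.
print(a)
```

The same finite-difference manipulation applies to any position-time table exported from a video-analysis tool.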

  10. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding (AVC) standard offers the facility to adapt real-time video streams in response to the dynamic conditions of the multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with multi-fold technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read-ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path-switching and mobile network tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in PSNR of the received stream compared with representative existing algorithms.
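    The first contribution, extending packet prioritisation to SVC's three-dimensional (dependency/spatial, temporal, quality) layer granularity, might be sketched as follows; the field names and drop policy are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of SVC-aware packet prioritisation: lower layer
# indices are more important, so under congestion packets are dropped
# from the highest (d, t, q) indices first. Not the paper's algorithm.
from collections import namedtuple

Packet = namedtuple("Packet", "seq d t q")  # d=spatial, t=temporal, q=quality

def priority(p):
    # The base layer (0, 0, 0) gets the highest priority (lowest sort key).
    return (p.d, p.t, p.q)

def drop_for_congestion(packets, keep):
    """Keep the `keep` most important packets, preserving send order."""
    kept = sorted(packets, key=priority)[:keep]
    return sorted(kept, key=lambda p: p.seq)

stream = [Packet(0, 0, 0, 0), Packet(1, 0, 1, 0), Packet(2, 1, 0, 0),
          Packet(3, 0, 0, 1), Packet(4, 1, 1, 1)]
survivors = drop_for_congestion(stream, keep=3)
print([p.seq for p in survivors])  # base and low-layer packets survive
```

A real scheme would also respect inter-layer decoding dependencies, but the ordering idea is the same.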

  11. Evaluation of a video image detection system : final report.

    DOT National Transportation Integrated Search

    1994-05-01

    A video image detection system (VIDS) is an advanced wide-area traffic monitoring system that processes input from a video camera. The Autoscope VIDS coupled with an information management system was selected as the monitoring device because test...

  12. Effectiveness of Student-Generated Video as a Teaching Tool for an Instrumental Technique in the Organic Chemistry Laboratory

    ERIC Educational Resources Information Center

    Jordan, Jeremy T.; Box, Melinda C.; Eguren, Kristen E.; Parker, Thomas A.; Saraldi-Gallardo, Victoria M.; Wolfe, Michael I.; Gallardo-Williams, Maria T.

    2016-01-01

    Multimedia instruction has been shown to serve as an effective learning aid for chemistry students. In this study, the viability of student-generated video instruction for organic chemistry laboratory techniques and procedure was examined and its effectiveness compared to instruction provided by a teaching assistant (TA) was evaluated. After…

  13. State Skill Standards: Digital Video & Broadcast Production

    ERIC Educational Resources Information Center

    Bullard, Susan; Tanner, Robin; Reedy, Brian; Grabavoi, Daphne; Ertman, James; Olson, Mark; Vaughan, Karen; Espinola, Ron

    2007-01-01

    The standards in this document are for digital video and broadcast production programs and are designed to clearly state what the student should know and be able to do upon completion of an advanced high-school program. Digital Video and Broadcast Production is a program that consists of the initial fundamentals and sequential courses that prepare…

  14. Content-based analysis of news video

    NASA Astrophysics Data System (ADS)

    Yu, Junqing; Zhou, Dongru; Liu, Huayong; Cai, Bo

    2001-09-01

    In this paper, we present a schema for content-based analysis of broadcast news video. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of the video, audio, and text data available from the news programs. Based on these news-video structure and content analysis technologies, a TV news video library is generated, from which users can retrieve specific news stories according to their demands.

  15. Intelligent video storage of visual evidences on site in fast deployment

    NASA Astrophysics Data System (ADS)

    Desurmont, Xavier; Bastide, Arnaud; Delaigle, Jean-Francois

    2004-07-01

    In this article we present a generic, flexible, scalable and robust approach for an intelligent real-time forensic visual system. The proposed implementation could be rapidly deployed and requires minimal logistic support, as it embeds low-complexity devices (PCs and cameras) that communicate through a wireless network. The goal of these advanced tools is to provide intelligent video storage of potential video evidence for fast intervention during deployment around a hazardous sector after a terrorist attack, a disaster, or an air crash, or before an attempted one. Advanced video analysis tools, such as segmentation and tracking, are provided to support intelligent storage and annotation.

  16. Virtual Space Camp Video Game

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Ferrari, K. A.; Lowes, L. L.; Raad, P. E.; Cuevas, T.; Purdy, J. A.

    2006-03-01

    With advances in computers, graphics, and especially video games, manned space exploration can become real by creating a safe, fun learning environment that allows players to explore the solar system from the comfort of their personal computers.

  17. Technical and economic feasibility of integrated video service by satellite

    NASA Technical Reports Server (NTRS)

    Price, K. M.; Kwan, R. K.; White, L. W.; Garlow, R. K.; Henderson, T. R.

    1992-01-01

    A feasibility study is presented of utilizing modern satellite technology, or more advanced technology, to create a cost-effective, user-friendly, integrated video service, which can provide videophone, video conference, or other equivalent wideband service on demand. A system is described that permits a user to select a desired audience and establish the required links similar to arranging a teleconference by phone. Attention is given to video standards, video traffic scenarios, satellite system architecture, and user costs.

  18. Using Video Feedback to Improve Horseback-Riding Skills

    ERIC Educational Resources Information Center

    Kelley, Heather; Miltenberger, Raymond G.

    2016-01-01

    This study used video feedback to improve the horseback-riding skills of advanced beginning riders. We focused on 3 skill sets: those used in jumping over obstacles, dressage riding on the flat, and jumping position riding on the flat. Baseline consisted of standard lesson procedures. Intervention consisted of video feedback in which a recorded…

  19. Advanced Method of Boundary-Layer Control Based on Localized Plasma Generation

    DTIC Science & Technology

    2009-05-01

    measurements, validation of experiments, wind-tunnel testing of the microwave/plasma generation system, preliminary assessment of energy required...and design of a microwave generator, electrodynamic and multivibrator systems for experiments in the IHM-NAU wind tunnel: MW generator and its high...equipped with the microwave-generation and protection systems to study advanced methods of flow control (Kiev) Fig. 2.1,a. The blade

  20. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
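    The conventional vector-quantization baseline the paper compares against can be sketched in a few lines: each image block is replaced by the index of its nearest codebook vector, so storage drops from one value per pixel to one index per block. The toy codebook and 2x2 blocks below are invented for illustration, and this is not the paper's self-organizing network.

```python
# Minimal vector-quantization sketch with a toy codebook (illustrative
# data, not from the paper).

def sq_dist(a, b):
    """Squared Euclidean distance between two pixel blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def vq_encode(blocks, codebook):
    """Map each block to the index of its nearest codeword."""
    return [min(range(len(codebook)), key=lambda i: sq_dist(b, codebook[i]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruct an approximation of the blocks from their indices."""
    return [codebook[i] for i in indices]

codebook = [(0, 0, 0, 0), (255, 255, 255, 255), (128, 128, 128, 128)]
blocks = [(10, 5, 0, 8), (250, 255, 240, 251), (130, 120, 125, 131)]

indices = vq_encode(blocks, codebook)
print(indices)  # each 4-pixel block is now stored as a single index
```

Training the codebook itself (e.g. by a self-organizing network, as in the paper) is the part that determines reconstruction quality.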

  1. Advanced Stirling Radioisotope Generator Engineering Unit 2 Anomaly Investigation

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Dobbs, Michael W.; Oriti, Salvatore M.

    2018-01-01

    The Advanced Stirling Radioisotope Generator (ASRG) Engineering Unit 2 (EU2) is the highest fidelity electrically heated Stirling radioisotope generator built to date. NASA Glenn Research Center completed the assembly of the ASRG EU2 in September 2014 using hardware from the now cancelled ASRG flight development project. The ASRG EU2 integrated the first pair of Sunpower's Advanced Stirling Convertors (ASC-E3 #1 and #2) in an aluminum generator housing with Lockheed Martin's (LM's) Engineering Development Unit (EDU) 4 controller. After just 179 hr of EU2 generator operation, the first power fluctuation occurred on ASC-E3 #1. The first power fluctuation occurred 175 hr later on ASC-E3 #2. Over time, the power fluctuations became more frequent on both convertors and larger in magnitude. Eventually the EU2 was shut down in January 2015. An anomaly investigation was chartered to determine root cause of the power fluctuations and other anomalous observations. A team with members from Glenn, Sunpower, and LM conducted a thorough investigation of the EU2 anomalies. Findings from the EU2 disassembly identified proximate causes of the anomalous observations. Discussion of the team's assessment of the primary possible failure theories, root cause, and conclusions is provided. Recommendations are made for future Stirling generator development to address the findings from the anomaly investigation. Additional findings from the investigation are also discussed.

  2. Issues and advances in research methods on video games and cognitive abilities

    PubMed Central

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game trainings, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process. PMID:26483717

  3. Issues and advances in research methods on video games and cognitive abilities.

    PubMed

    Sobczyk, Bart; Dobrowolski, Paweł; Skorko, Maciek; Michalak, Jakub; Brzezicka, Aneta

    2015-01-01

    The impact of video game playing on cognitive abilities has been the focus of numerous studies over the last 10 years. Some cross-sectional comparisons indicate the cognitive advantages of video game players (VGPs) over non-players (NVGPs) and the benefits of video game trainings, while others fail to replicate these findings. Though there is an ongoing discussion over methodological practices and their impact on observable effects, some elementary issues, such as the representativeness of recruited VGP groups and lack of genre differentiation have not yet been widely addressed. In this article we present objective and declarative gameplay time data gathered from large samples in order to illustrate how playtime is distributed over VGP populations. The implications of this data are then discussed in the context of previous studies in the field. We also argue in favor of differentiating video games based on their genre when recruiting study samples, as this form of classification reflects the core mechanics that they utilize and therefore provides a measure of insight into what cognitive functions are likely to be engaged most. Additionally, we present the Covert Video Game Experience Questionnaire as an example of how this sort of classification can be applied during the recruitment process.

  4. Video traffic characteristics of modern encoding standards: H.264/AVC with SVC and MVC extensions and H.265/HEVC.

    PubMed

    Seeling, Patrick; Reisslein, Martin

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC.
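    The traffic statistics at the heart of such a study, per-stream frame-size variability and the smoothing effect of statistical multiplexing, can be illustrated with made-up frame-size traces (not the paper's encoded traces).

```python
# Illustrative traffic statistics on hypothetical frame-size traces:
# coefficient of variation (CoV) per stream, and CoV of the aggregate
# when several streams are statistically multiplexed.
import statistics

def cov(sizes):
    """Coefficient of variation: population std dev over the mean."""
    return statistics.pstdev(sizes) / statistics.mean(sizes)

# Hypothetical frame-size traces (bytes) for three video streams, each
# with large intra-coded frames interleaved with small predicted frames.
streams = [
    [12000, 3000, 3500, 11000, 2800, 3600],
    [9000, 2500, 10000, 2600, 2700, 9500],
    [3000, 11000, 2900, 3100, 10500, 2800],
]

per_stream_cov = [cov(s) for s in streams]
aggregate = [sum(frames) for frames in zip(*streams)]  # multiplexed trace

# The aggregate is smoother: its CoV is below each individual stream's,
# which is the statistical multiplexing gain the paper quantifies.
print(per_stream_cov, cov(aggregate))
```

With real traces the same computation reveals how much burstier H.265/HEVC streams are than their mean bitrate suggests.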

  5. Video Traffic Characteristics of Modern Encoding Standards: H.264/AVC with SVC and MVC Extensions and H.265/HEVC

    PubMed Central

    2014-01-01

    Video encoding for multimedia services over communication networks has significantly advanced in recent years with the development of the highly efficient and flexible H.264/AVC video coding standard and its SVC extension. The emerging H.265/HEVC video coding standard as well as 3D video coding further advance video coding for multimedia communications. This paper first gives an overview of these new video coding standards and then examines their implications for multimedia communications by studying the traffic characteristics of long videos encoded with the new coding standards. We review video coding advances from MPEG-2 and MPEG-4 Part 2 to H.264/AVC and its SVC and MVC extensions as well as H.265/HEVC. For single-layer (nonscalable) video, we compare H.265/HEVC and H.264/AVC in terms of video traffic and statistical multiplexing characteristics. Our study is the first to examine the H.265/HEVC traffic variability for long videos. We also illustrate the video traffic characteristics and statistical multiplexing of scalable video encoded with the SVC extension of H.264/AVC as well as 3D video encoded with the MVC extension of H.264/AVC. PMID:24701145

  6. Video Fact Sheets: Everyday Advanced Materials

    ScienceCinema

    None

    2018-06-21

    What are Advanced Materials? Ames Laboratory is behind some of the best advanced materials out there. Some of those include: Lead-Free Solder, Photonic Band-Gap Crystals, Terfenol-D, Aluminum-Calcium Power Cable and Nano Particles. Some of these are in products we use every day.

  7. Video Fact Sheets: Everyday Advanced Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2015-10-06

    What are Advanced Materials? Ames Laboratory is behind some of the best advanced materials out there. Some of those include: Lead-Free Solder, Photonic Band-Gap Crystals, Terfenol-D, Aluminum-Calcium Power Cable and Nano Particles. Some of these are in products we use every day.

  8. Automated Video Quality Assessment for Deep-Sea Video

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
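    The first step listed, filtering out unusable data, might look like the following sketch; the luminance thresholds and frame data are invented for illustration.

```python
# Hedged sketch of unusable-frame filtering for underwater video:
# frames that are nearly black (lights off) or nearly saturated (glare)
# are discarded based on mean luminance. Thresholds are assumptions.

def usable(frame, lo=0.05, hi=0.95):
    """Keep a frame whose mean luminance is neither too dark nor too bright."""
    mean = sum(frame) / len(frame)
    return lo < mean < hi

# Hypothetical per-pixel luminance values (0..1) for four frames.
frames = [
    [0.01, 0.02, 0.00, 0.01],  # lights off: unusable
    [0.40, 0.55, 0.47, 0.52],  # normal scene
    [0.98, 0.99, 0.97, 0.99],  # glare/saturation: unusable
    [0.30, 0.20, 0.25, 0.28],  # dim but usable
]
kept = [i for i, f in enumerate(frames) if usable(f)]
print(kept)  # indices of frames passed on to balancing and description
```

Colour and luminance balancing would then operate only on the frames that survive this filter.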

  9. Video Analysis of Anterior Cruciate Ligament (ACL) Injuries

    PubMed Central

    Carlson, Victor R.; Sheehan, Frances T.; Boden, Barry P.

    2016-01-01

    Background: As the most viable method for investigating in vivo anterior cruciate ligament (ACL) rupture, video analysis is critical for understanding ACL injury mechanisms and advancing preventative training programs. Despite the limited number of published studies involving video analysis, much has been gained through evaluating actual injury scenarios. Methods: Studies meeting criteria for this systematic review were collected by performing a broad search of the ACL literature with use of variations and combinations of video recordings and ACL injuries. Both descriptive and analytical studies were included. Results: Descriptive studies have identified specific conditions that increase the likelihood of an ACL injury. These conditions include close proximity to opposing players or other perturbations, high shoe-surface friction, and landing on the heel or the flat portion of the foot. Analytical studies have identified high-risk joint angles on landing, such as a combination of decreased ankle plantar flexion, decreased knee flexion, and increased hip flexion. Conclusions: The high-risk landing position appears to influence the likelihood of ACL injury to a much greater extent than inherent risk factors. As such, on the basis of the results of video analysis, preventative training should be applied broadly. Kinematic data from video analysis have provided insights into the dominant forces that are responsible for the injury (i.e., axial compression with potential contributions from quadriceps contraction and valgus loading). With the advances in video technology currently underway, video analysis will likely lead to enhanced understanding of non-contact ACL injury. PMID:27922985

  10. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  11. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...

  12. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  13. Testing to Characterize the Advanced Stirling Radioisotope Generator Engineering Unit

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward; Schreiber, Jeffrey

    2010-01-01

    The Advanced Stirling Radioisotope Generator (ASRG), a high efficiency generator, is being considered for space missions. Lockheed Martin designed and fabricated an engineering unit (EU), the ASRG EU, under contract to the Department of Energy. This unit is currently undergoing extended operation testing at the NASA Glenn Research Center to generate performance data and validate life and reliability predictions for the generator and the Stirling convertors. It has also undergone performance tests to characterize generator operation while varying control parameters and system inputs. This paper summarizes and explains test results in the context of designing operating strategies for the generator during a space mission and notes expected differences between the EU performance and future generators.

  14. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must in the context of advanced safety systems. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
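    One step of such a pipeline, recovering the heading (yaw) angle once calibration yields a rotation matrix for the tracked car, can be sketched as follows; the axis convention is an assumption, and this is not the authors' code.

```python
# Illustrative heading extraction: for a rotation about the vertical
# (z) axis, the yaw angle follows directly from the matrix elements.
# R is built here for a known yaw to demonstrate the round trip.
import math

def yaw_from_rotation(R):
    """Heading angle (radians) from a 3x3 rotation about the z axis."""
    return math.atan2(R[1][0], R[0][0])

def rot_z(theta):
    """Rotation matrix for a rotation of `theta` radians about z."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

R = rot_z(math.radians(30.0))
heading = math.degrees(yaw_from_rotation(R))
print(heading)  # recovers the 30-degree heading used to build R
```

In practice R would come from the calibration tool per frame rather than being constructed analytically.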

  15. Utilizing Computer and Multimedia Technology in Generating Choreography for the Advanced Dance Student at the High School Level.

    ERIC Educational Resources Information Center

    Griffin, Irma Amado

    This study describes a pilot program utilizing various multimedia computer programs on a MacQuadra 840 AV. The target group consisted of six advanced dance students who participated in the pilot program within the dance curriculum by creating a database of dance movement using video and still photography. The students combined desktop publishing,…

  16. Upstream-advancing waves generated by three-dimensional moving disturbances

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Joon; Grimshaw, Roger H. J.

    1990-02-01

    The wave field resulting from a surface pressure or a bottom topography in a horizontally unbounded domain is studied. Upstream-advancing waves successively generated by various forcing disturbances moving with near-resonant speeds are found by numerically solving a forced Kadomtsev-Petviashvili (fKP) equation, which shows in its simplest form the interplay of a basic linear wave operator, longitudinal and transverse dispersion, nonlinearity, and forcing. Curved solitary waves are found as a slowly varying similarity solution of the Kadomtsev-Petviashvili (KP) equation, and are favorably compared with the upstream-advancing waves numerically obtained.

  17. Fabrication of Advanced Thermoelectric Materials by Hierarchical Nanovoid Generation

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Elliott, James R. (Inventor); Stoakley, Diane M. (Inventor); Chu, Sang-Hyon (Inventor); King, Glen C. (Inventor); Kim, Jae-Woo (Inventor); Choi, Sang Hyouk (Inventor); Lillehei, Peter T. (Inventor)

    2011-01-01

    A novel method to prepare an advanced thermoelectric material produces hierarchical structures embedded with nanometer-sized voids, which are key to enhancing the thermoelectric performance. A solution-based thin-film deposition technique enables preparation of a stable film of thermoelectric material and void generator (voigen). A subsequent thermal process creates the hierarchical nanovoid structure inside the thermoelectric material. Potential application areas of this advanced thermoelectric material with nanovoid structure are commercial applications (electronics cooling), medical and scientific applications (biological analysis devices, medical imaging systems), telecommunications, and defense and military applications (night-vision equipment).

  18. User-oriented summary extraction for soccer video based on multimodal analysis

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm of user-oriented summary extraction for soccer video is introduced: a novel approach that integrates multimodal analysis, such as extraction and analysis of stadium features, moving-object features, audio features and text features. From these features, the semantics of the soccer video and its highlight mode are obtained. We can then find the highlight positions and assemble them by highlight degree to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  19. Video denoising using low rank tensor decomposition

    NASA Astrophysics Data System (ADS)

    Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting

    2017-03-01

    Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, its main drawback is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of our proposed method.
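    The low-rank idea behind such methods can be illustrated with a plain truncated SVD on vectorised patches, a simplification of the paper's Bayesian tensor decomposition, run here on synthetic data: similar patches stacked as matrix rows are approximately low-rank, so discarding small singular values suppresses independent noise.

```python
# Simplified low-rank denoising sketch (truncated SVD on stacked
# patches, synthetic data; not the authors' tensor method).
import numpy as np

rng = np.random.default_rng(0)

def low_rank_denoise(patches, rank):
    """Project stacked patch vectors onto their top-`rank` subspace."""
    u, s, vt = np.linalg.svd(patches, full_matrices=False)
    s[rank:] = 0.0
    return u @ np.diag(s) @ vt

# 20 "similar" patches: one clean 16-pixel pattern plus Gaussian noise,
# so the clean stack is exactly rank 1.
clean = np.tile(np.linspace(0.0, 1.0, 16), (20, 1))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

denoised = low_rank_denoise(noisy, rank=1)

# Rank-1 truncation removes most of the noise energy.
print(np.abs(denoised - clean).mean(), np.abs(noisy - clean).mean())
```

The paper's contribution is choosing the effective rank (via the noise precision prior) automatically instead of fixing it as done here.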

  20. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    NASA Technical Reports Server (NTRS)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  1. Video personalization for usage environment

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  2. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    NASA Astrophysics Data System (ADS)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
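    The search side of such a framework, once OCR text is available per topical segment, reduces to an inverted index that maps words to segments; the lecture segments below are invented example data, not the ICS implementation.

```python
# Minimal inverted-index sketch for searching lecture-video segments
# by their OCR-extracted text (hypothetical data).
from collections import defaultdict

def build_index(segments):
    """Map lower-cased words to the ids of segments containing them."""
    index = defaultdict(set)
    for seg_id, text in segments.items():
        for word in text.lower().split():
            index[word].add(seg_id)
    return index

def search(index, query):
    """Return sorted ids of segments containing every query word."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

# Hypothetical OCR output for three topical segments of one lecture.
segments = {
    1: "Binary search trees insertion and deletion",
    2: "Hash tables collision resolution chaining",
    3: "Balanced search trees AVL rotation",
}
index = build_index(segments)
print(search(index, "search trees"))  # segments covering both words
```

A player can then seek directly to the start time of any matching segment, which is the "instant access" the abstract describes.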

  3. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques: video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453
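
    The core association step (attaching time-stamped chat call-outs to video target tracks) can be sketched as a simple temporal-overlap match; the actual VIVA association is probabilistic and multi-feature, and the labels, track IDs, and time window below are invented for illustration.

```python
# Toy sketch: associate time-stamped analyst call-outs (ACOs) with target
# tracks whose active interval overlaps the call-out time (within a window).
def associate(acos, tracks, window=5.0):
    """acos: list of (timestamp, label);
    tracks: list of (track_id, t_start, t_end).
    Returns {track_id: [labels]}."""
    labels = {tid: [] for tid, _, _ in tracks}
    for t, label in acos:
        for tid, t0, t1 in tracks:
            if t0 - window <= t <= t1 + window:
                labels[tid].append(label)
    return labels

acos = [(12.0, "white pickup"), (47.5, "dismount at gate")]
tracks = [("T1", 10.0, 30.0), ("T2", 45.0, 60.0)]
print(associate(acos, tracks))
# -> {'T1': ['white pickup'], 'T2': ['dismount at gate']}
```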

  4. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques: video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  5. More About The Video Event Trigger

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1996-01-01

    Report presents additional information about the system described in "Video Event Trigger" (LEW-15076). The digital electronic system processes video-image data to generate a trigger signal when the image shows a significant change, such as motion, or the appearance, disappearance, or change in color, brightness, or dilation of an object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when they are supposed to be unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
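
    The change-detection idea behind such a trigger can be sketched with simple frame differencing (an illustration only, not the LEW-15076 hardware design; the threshold and frame data are invented):

```python
# Minimal frame-differencing trigger: fire when the mean absolute pixel
# difference between consecutive frames exceeds a threshold.
def triggers(frames, threshold=10.0):
    """frames: list of equal-length grayscale pixel lists.
    Returns indices of frames that trigger."""
    fired = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        if diff / len(frames[i]) > threshold:
            fired.append(i)
    return fired

static = [100] * 16
moved = [100] * 8 + [180] * 8  # half the pixels change brightness
print(triggers([static, static, moved, moved]))  # -> [2]
```

    Only the frame where the scene actually changes fires; identical consecutive frames produce zero difference and no trigger.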

  6. Promoting Academic Programs Using Online Videos

    ERIC Educational Resources Information Center

    Clark, Thomas; Stewart, Julie

    2007-01-01

    In the last 20 years, the Internet has evolved from simply conveying text and then still photographs and music to the present-day medium in which individuals are contributors and consumers of a nearly infinite number of professional and do-it-yourself videos. In this dynamic environment, new generations of Internet users are streaming video and…

  7. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies for both head-mounted devices and television panels demand resolution increases beyond 4K for the source signal in virtual-reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, investigation revealed that compression efficiency may fluctuate by 40% on average depending on origin selection at the conversion stage from 3D space to the 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
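
    The idea of rotating the projection origin to minimize seam discontinuity can be shown on a one-dimensional toy (this is an assumption-laden sketch, not the authors' entropy-based algorithm): rotating a cyclic row of samples changes which neighboring values end up at the left/right edges of the unrolled 2D projection.

```python
# Toy origin selection: pick the cyclic rotation of a sample row whose
# wrap-around seam has the smallest discontinuity.
def seam_cost(row, shift):
    rotated = row[shift:] + row[:shift]
    return abs(rotated[0] - rotated[-1])  # discontinuity across the seam

def best_origin(row):
    return min(range(len(row)), key=lambda s: seam_cost(row, s))

row = [10, 12, 90, 91, 89, 11]  # smooth except for the 12 -> 90 jump
shift = best_origin(row)
print(shift, seam_cost(row, shift))
```

    A poorly chosen origin would place the 12-to-90 jump at the seam (cost 78); the selected origin keeps the seam smooth.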

  8. Surgical videos online: a survey of prominent sources and future trends.

    PubMed

    Dinscore, Amanda; Andres, Amy

    2010-01-01

    This article determines the extent of the online availability and quality of surgical videos for the educational benefit of the surgical community. A comprehensive survey was performed that compared a number of online sites providing surgical videos according to their content, production quality, authority, audience, navigability, and other features. Methods for evaluating video content are discussed as well as possible future directions and emerging trends. Surgical videos are a valuable tool for demonstrating and teaching surgical technique and, despite room for growth in this area, advances in streaming video technology have made providing and accessing these resources easier than ever before.

  9. Video-Guidance Design for the DART Rendezvous Mission

    NASA Technical Reports Server (NTRS)

    Ruth, Michael; Tracy, Chisholm

    2004-01-01

    NASA's Demonstration of Autonomous Rendezvous Technology (DART) mission will validate a number of different guidance technologies, including state-differenced GPS transfers and close-approach video guidance. The video guidance for DART will employ NASA/Marshall's Advanced Video Guidance Sensor (AVGS). This paper focuses on the terminal phase of the DART mission that includes close-approach maneuvers under AVGS guidance. The closed-loop video guidance design for DART is driven by a number of competing requirements, including a need for maximizing tracking bandwidths while coping with measurement noise and the need to minimize RCS firings. A range of different strategies for attitude control and docking guidance have been considered for the DART mission, and design decisions are driven by a goal of minimizing both the design complexity and the effects of video guidance lags. The DART design employs an indirect docking approach, in which the guidance position targets are defined using relative attitude information. Flight simulation results have proven the effectiveness of the video guidance design.

  10. Psychiatric Advance Directives: Getting Started

    MedlinePlus

    ... the United States View PDF Type of PADs Federal Law on Advance Directives View PDF “Introducing Psychiatric Advance ... Ph.D., M.L.S. View video (12:08) “Federal Law on Advance Directives: The Patient Self-Determination Act” ...

  11. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field to be processed by Intergraph software, to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  12. Advanced sensors and instrumentation

    NASA Technical Reports Server (NTRS)

    Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty

    1990-01-01

    NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.

  13. Benchmarking emergency department thoracotomy: Using trauma video review to generate procedural norms.

    PubMed

    Dumas, Ryan P; Chreiman, Kristen M; Seamon, Mark J; Cannon, Jeremy W; Reilly, Patrick M; Christie, Jason D; Holena, Daniel N

    2018-05-23

    Emergency department thoracotomy (EDT) must be rapid and well-executed. Currently there are no defined benchmarks for EDT procedural milestones. We hypothesized that trauma video review (TVR) can be used to define the 'normative EDT' and generate procedural benchmarks. As a secondary aim, we hypothesized that data collected by TVR would have less missingness and bias than data collected by review of the electronic medical record (EMR). We used continuously recording video to review all EDTs performed at our centre during the study period. Using skin incision as the start time, we defined four procedural milestones for EDT: 1. Decompression of the right chest (tube thoracostomy, finger thoracostomy, or clamshell thoracotomy with transverse sternotomy performed in conjunction with left anterolateral thoracotomy) 2. Retractor deployment 3. Pericardiotomy 4. Aortic cross-clamp. EDTs with any milestone time at or above the 75th percentile, or during which a milestone was omitted, were identified as outliers. We compared rates of missingness in data collected by TVR and EMR using McNemar's test. 44 EDTs were included from the study period. Patients had a median age of 30 [IQR 25-44] and were predominantly African-American (95%) males (93%) with penetrating trauma (95%). From skin incision, median times in minutes to milestones were as follows: right chest decompression 2.11 [IQR 0.68-2.83], retractor deployment 1.35 [IQR 0.96-1.85], pericardiotomy 2.35 [IQR 1.85-3.75], aortic cross-clamp 3.71 [IQR 2.83-5.77]. In total, 28/44 (64%) of EDTs were either high outliers for one or more benchmarks or had milestones that were omitted. For all milestones, rates of missingness for TVR data were lower than for EMR data (p < 0.001). Video review can be used to define normative times for the procedural milestones of EDT. Steps exceeding the 75th percentile of time were common, with over half of EDTs having at least one milestone as an outlier. Data quality is higher using TVR compared to EMR review.
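
    The outlier rule described in this record (any milestone time at or above the 75th percentile, or an omitted milestone) can be sketched as follows; the case IDs, milestone names, and times below are invented, not the study's data.

```python
# Sketch of the benchmarking logic: compute a nearest-rank 75th-percentile
# cutoff per milestone and flag cases that meet/exceed it or omit a step
# (recorded as None).
def percentile75(values):
    xs = sorted(values)
    return xs[max(0, -(-3 * len(xs) // 4) - 1)]  # nearest-rank 75th pct

def flag_outliers(cases):
    """cases: {case_id: {milestone: minutes or None}} -> set of outlier ids."""
    milestones = {m for times in cases.values() for m in times}
    cutoff = {m: percentile75([t[m] for t in cases.values()
                               if t.get(m) is not None])
              for m in milestones}
    out = set()
    for cid, times in cases.items():
        for m in milestones:
            t = times.get(m)
            if t is None or t >= cutoff[m]:
                out.add(cid)
    return out

cases = {
    "A": {"pericardiotomy": 1.9, "cross_clamp": 3.0},
    "B": {"pericardiotomy": 2.4, "cross_clamp": 3.7},
    "C": {"pericardiotomy": 5.0, "cross_clamp": None},  # slow + omitted step
}
print(sorted(flag_outliers(cases)))  # -> ['B', 'C']
```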

  14. High-definition video display based on the FPGA and THS8200

    NASA Astrophysics Data System (ADS)

    Qian, Jia; Sui, Xiubao

    2014-11-01

    This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder chip from Texas Instruments with three 10-bit DAC channels; it accepts video data in both 4:2:2 and 4:4:4 formats, and its data synchronization can come either from the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this paper, we utilize the address and control signals generated by the FPGA to access the data-storage array; the FPGA then generates the corresponding digital video signals YCbCr. These signals, combined with the HSYNC and VSYNC synchronization signals also generated by the FPGA, act as the input signals of the THS8200. To meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The THS8200 is configured by the FPGA over the I2C bus to set its internal registers; as a result, it can generate synchronization signals that satisfy the SMPTE standards and convert the digital YCbCr video signals into analog YPbPr video signals. Hence, the composite analog output signals YPbPr consist of the image data signal and the synchronization signal, which are superimposed inside the THS8200. Experimental research indicates that the method presented in this paper is a viable solution for high-definition video display and conforms to the input requirements of new high-definition display devices.

  15. Camera network video summarization

    NASA Astrophysics Data System (ADS)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
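
    The capped l2,1-norm mentioned in this record sums the row-wise l2 norms of a coefficient matrix, capping each row's contribution at a threshold so outlier rows cannot dominate the objective. A plain-Python sketch (illustrative values only, not the paper's optimization):

```python
# Capped l2,1-norm: sum over rows of min(||row||_2, cap).
import math

def capped_l21(C, cap):
    """C: list of rows (lists of floats); cap: threshold."""
    return sum(min(math.sqrt(sum(x * x for x in row)), cap) for row in C)

C = [[3.0, 4.0],   # row norm 5.0 -> capped at 2.0
     [0.0, 1.0]]   # row norm 1.0 -> kept as-is
print(capped_l21(C, cap=2.0))  # -> 3.0
```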

  16. Pendulum Exercises After Hip Arthroscopy: A Video Technique.

    PubMed

    Sauber, Ryan; Saborio, George; Nickel, Beth M; Kivlan, Benjamin R; Christoforetti, John J

    2016-08-01

    Advanced hip joint-preserving arthroscopic techniques have been shown to improve patient-reported functional outcomes with low rates of postoperative complications. Prior work has shown that formation of adhesive scar is a potential source of persistent pain and cause for revision surgery. As resources for postoperative in-studio physical therapy become scarce, a home-based strategy to avoid scar formation without adding formal therapy cost may be beneficial. The purpose of this technical note is to introduce a patient-centered educational video technique for home-caregiver delivery of manual hip pendulum exercises in the postoperative setting. This video technique offers access to our method for pendulum exercise as part of early recovery after advanced hip arthroscopy.

  17. Advanced Video Activity Analytics (AVAA): Human Factors Evaluation

    DTIC Science & Technology

    2015-05-01

    video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and... visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally...affect performance on the primary task; however, in the current test there was no apparent effect on the operator’s performance in the visual search task

  18. 76 FR 21741 - Twenty-First Century Communications and Video Programming Accessibility Act; Announcement of Town...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... FEDERAL COMMUNICATIONS COMMISSION [DA 11-428] Twenty-First Century Communications and Video... The Twenty-First Century Communications and Video Programming Accessibility Act (the Act or CVAA... orientation to the Act, and discussed the advanced communications and video programming changes required by...

  19. Evaluation of video detection systems, volume 3 : effects of windy conditions in the performance of video detection systems.

    DOT National Transportation Integrated Search

    2008-05-01

    The performance of three Video Detection Systems (VDS), namely Autoscope, Iteris, and Peek, was evaluated at stop bar and advance locations at an instrumented signalized intersection located in Rantoul, Illinois, utilizing a side-by-side install...

  20. Mathematics Teachers' Self-Captured Video and Opportunities for Learning

    ERIC Educational Resources Information Center

    Sherin, Miriam Gamoran; Dyer, Elizabeth B.

    2017-01-01

    Numerous video-based programs have been developed to support mathematics teachers in reflecting on and examining classrooms interactions without the immediate demands of instruction. An important premise of such work is that teacher learning occurs at the time that the video is viewed and discussed with teachers. Recent advances in technology,…

  1. Carpet Specifiers Guide. Ultron, Advanced Generation Nylon Carpet Fiber.

    ERIC Educational Resources Information Center

    Monsanto Textiles Co., Atlanta, GA.

    The purpose of this guide is to assist specifiers in properly specifying carpet made of Monsanto Ultron advanced generation nylon fiber. The guide describes a variety of conditions that should be considered in arriving at the proper selection and provides reference information and data, ranging from varying regulatory requirements, performance and…

  2. Video on phone lines: technology and applications

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1996-03-01

    Recent advances in communications signal processing and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines, promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises and rapid advances in video coding techniques and standards activity, information networks around the globe are now exploring possible business opportunities of offering quality video services (such as distance learning, telemedicine, and telecommuting) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever, both technically and economically.

  3. Evaluation of video detection systems, volume 2 : effects of illumination conditions in the performance of video detection systems.

    DOT National Transportation Integrated Search

    2009-05-01

    The evaluation of three Video Detection Systems (VDS) at an instrumented signalized intersection in Rantoul, Illinois, at both stop bar and advance detection zones, was performed under a wide range of lighting and weather conditions. The evaluated...

  4. Video Eases End-of-Life Care Discussions

    Cancer.gov

    Patients with advanced cancer who watched a video that depicts options for end-of-life care were more certain of their end-of-life decision making than patients who only listened to a verbal narrative.

  5. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  6. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
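
    The "top down" oct-tree refinement this record builds on can be sketched in a few lines (a serial, illustrative toy; the paper's method is parallel and hybrid top-down/bottom-up, and the refinement region below is invented):

```python
# Minimal top-down oct-tree refinement: recursively split cells flagged by
# a refinement predicate until a target cell size is reached.
def refine(cell, needs_refine, min_size, leaves):
    """cell: (x, y, z, size); needs_refine: predicate on a cell."""
    x, y, z, s = cell
    if s <= min_size or not needs_refine(cell):
        leaves.append(cell)
        return
    h = s / 2.0
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                refine((x + dx, y + dy, z + dz, h), needs_refine,
                       min_size, leaves)

# Refine a unit cube down to size 0.25 near the origin corner only.
near_origin = lambda c: c[0] < 0.5 and c[1] < 0.5 and c[2] < 0.5
leaves = []
refine((0.0, 0.0, 0.0, 1.0), near_origin, 0.25, leaves)
print(len(leaves))  # -> 15: 8 fine cells near the origin, 7 coarse cells
```

    The cell count grows only where the predicate demands resolution, which is why oct-tree meshes scale to off-body regions around complex geometries.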

  7. Women, Video Gaming and Learning: Beyond Stereotypes

    ERIC Educational Resources Information Center

    Hayes, Elisabeth

    2005-01-01

    While video gaming has grown immensely as an industry over the last decade, with growing numbers of gamers around the globe, including women, gaming continues to be a very gendered practice. The apparent gender divide in video gaming has caught the attention of both the gaming industry and educators, generating considerable discussion and…

  8. A practical implementation of free viewpoint video system for soccer games

    NASA Astrophysics Data System (ADS)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is highly demanded. However, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing. However, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium, where we used 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. In order to facilitate free viewpoint video generation, all cameras should be calibrated. We calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.
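
    The background-estimation step described here (observing per-pixel changes over time from a fixed camera) is commonly approximated by a temporal median; the sketch below is a stand-in with invented pixel data, not the authors' chrominance-based method.

```python
# With a static camera, the per-pixel temporal median over many frames
# suppresses moving players and recovers the background.
from statistics import median

def background(frames):
    """frames: list of equal-length pixel lists from a fixed camera."""
    return [median(pix) for pix in zip(*frames)]

frames = [
    [50, 50, 200, 50],  # a bright player occupies pixel 2 in one frame
    [50, 50, 50, 50],
    [50, 50, 50, 50],
]
print(background(frames))  # -> [50, 50, 50, 50]
```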

  9. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed-wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed-wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed-wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a front end with a database to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to the real video data acquired by the advanced COTS fixed-wing UAV and to synthetic data.
For the

  10. Video Kills the Lecturing Star: New Technologies and the Teaching of Meteorology.

    ERIC Educational Resources Information Center

    Sumner, Graham

    1984-01-01

    The educational potential of time-lapse video sequences and weather data obtained using a conventional microcomputer are considered in the light of recent advances in both fields. Illustrates how videos and microcomputers can be used to study clouds in meteorology classes. (RM)

  11. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.

    PubMed

    Chandrasekaran, Jeyamala; Thiruvengadam, S J

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
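
    The S-box construction described in this record can be sketched as follows (an illustration of the general idea; the paper's exact construction may differ, and the initial conditions here stand in for key-derived values): iterate the 2-D Henon map, discard a transient, and rank-order the chaotic sequence into a byte-substitution permutation.

```python
# Key-dependent S-box from the 2-D Henon map:
# x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n  (classic a=1.4, b=0.3).
def henon_sbox(x0, y0, a=1.4, b=0.3, skip=1000):
    x, y = x0, y0
    for _ in range(skip):  # discard transient iterations
        x, y = 1 - a * x * x + y, b * x
    seq = []
    for _ in range(256):
        x, y = 1 - a * x * x + y, b * x
        seq.append(x)
    # Rank ordering turns the chaotic sequence into a permutation of 0..255.
    sbox = [0] * 256
    for rank, idx in enumerate(sorted(range(256), key=seq.__getitem__)):
        sbox[idx] = rank
    return sbox

sbox = henon_sbox(0.1, 0.1)
print(len(sbox), sorted(sbox) == list(range(256)))  # a valid permutation
```

    Because the map is extremely sensitive to initial conditions, even slightly different key-derived (x0, y0) values yield a completely different permutation, which is the property the record relies on for key dependence.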

  12. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption

    PubMed Central

    Chandrasekaran, Jeyamala; Thiruvengadam, S. J.

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603

  13. Sequence to Sequence - Video to Text

    DTIC Science & Technology

    2015-12-11

    Saenko, and S. Guadarrama. Generating natural-language video descriptions using text-mined knowledge. In AAAI, July 2013. [20] P. Kuznetsova, V... Sequence to Sequence -- Video to Text. Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko... 1. Introduction. Describing visual content with natural language text has recently received increased interest, especially describing images with a

  14. [The effects of video games on cognitive aging].

    PubMed

    Maillot, Pauline; Perrot, Alexandra; Hartley, Alan

    2012-03-01

    Advancing age is associated with cognitive decline, which, however, remains a very heterogeneous phenomenon. Indeed, several extrinsic factors seem to modulate the effect of aging on cognition. Recently, several studies have provided evidence that the practice of video games could engender many benefits by favoring the maintenance of cognitive vitality in the elderly. This review of the literature aims to establish a precise inventory of the relations between the various types of video games and cognitive aging, including both sedentary video games (i.e., classics as well as brain training) and active video games (i.e., exergames). The largest benefits seem to be provided by exergames which combine game play with significant physical exercise. This article also tries to define the determinants of the training programs which could be responsible for the observed improvements.

  15. The effect of online violent video games on levels of aggression.

    PubMed

    Hollingdale, Jack; Greitemeyer, Tobias

    2014-01-01

    In recent years the video game industry has surpassed both the music and video industries in sales. Currently violent video games are among the most popular video games played by consumers, most specifically First-Person Shooters (FPS). Technological advancements in game play experience, including the ability to play online, have accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase levels of aggression. Little is known, however, as to the effect of playing a violent video game online. Participants (N = 101) were randomly assigned to one of four experimental conditions: neutral video game--offline, neutral video game--online, violent video game--offline and violent video game--online. Following this they completed questionnaires to assess their attitudes towards the game and engaged in a chilli sauce paradigm to measure behavioural aggression. The results identified that participants who played a violent video game exhibited more aggression than those who played a neutral video game. Furthermore, this main effect was not particularly pronounced when the game was played online. These findings suggest that playing violent video games, whether online or offline, increases aggression compared to playing neutral video games.

  16. SIRSALE: integrated video database management tools

    NASA Astrophysics Data System (ADS)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access, and play back distributed stored video data as easily as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and must be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, and video indexing. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We present how dedicated active services enable optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news video and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  17. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper that comprises a motion perception quality index and a spatial quality index. More specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence, giving the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices correlates highly with human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
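The gradient similarity measure behind the spatial quality index can be illustrated with a minimal sketch. The forward-difference gradient and the stabilizing constant `c` below are assumptions for illustration; the paper's exact operator may differ.

```python
# Hedged sketch of a gradient-similarity spatial quality index:
# compare per-pixel gradient magnitudes of a reference and a
# distorted frame, with a small constant c for numerical stability.
def gradient_magnitude(img):
    """Per-pixel gradient magnitude via forward differences (edge-replicated)."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = img[i][min(j + 1, w - 1)] - img[i][j]
            gy = img[min(i + 1, h - 1)][j] - img[i][j]
            g[i][j] = (gx * gx + gy * gy) ** 0.5
    return g

def gradient_similarity(ref, dist, c=1e-4):
    """Mean gradient-similarity score; 1.0 means identical local structure."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    total, n = 0.0, 0
    for row1, row2 in zip(g1, g2):
        for a, b in zip(row1, row2):
            total += (2 * a * b + c) / (a * a + b * b + c)
            n += 1
    return total / n
```

Identical frames score exactly 1.0, and any structural distortion pulls the score below 1.0, which is the property a regression model can then map to subjective quality.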

  18. A Learning Design for Student-Generated Digital Storytelling

    ERIC Educational Resources Information Center

    Kearney, Matthew

    2011-01-01

    The literature on digital video in education emphasises the use of pre-fabricated, instructional-style video assets. Learning designs for supporting the use of these expert-generated video products have been developed. However, there has been a paucity of pedagogical frameworks for facilitating specific genres of learner-generated video projects.…

  19. Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.

    PubMed

    Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K

    2004-07-01

    Digital video now plays an important role in medical education, health care, telemedicine, and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: the semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework that makes some advances toward the final goal of solving these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework that uses principal video shots to enhance the quality of features; 2) semantic video concept interpretation using a flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework that integrates feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique based on a domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.

  20. Recent advances in multiview distributed video coding

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.
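The homography-based inter-view side information described above amounts to projecting pixels from one camera view into another through a 3x3 homography. A toy sketch of that projection step (the matrix values in the usage example are illustrative, not calibrated camera data):

```python
# Hedged sketch: map a pixel from one camera view into another via a
# 3x3 homography H, working in homogeneous coordinates.
def apply_homography(H, x, y):
    """Project pixel (x, y) through homography H and dehomogenize."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

In a multi-view DVC decoder, warping a decoded neighboring view this way yields a side-information frame without any camera-to-camera communication; here, a pure translation homography shifts a pixel by (+5, -3).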

  1. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric(TM) Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense '97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric(TM) imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  2. Characterization of the Advanced Stirling Radioisotope Generator Engineering Unit 2

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Oriti, Salvatore M.; Schifer, Nicholas A.

    2016-01-01

    Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG) 140-W radioisotope power system. While the ASRG flight development project has ended, the hardware that was designed and built under the project is continuing to be tested to support future Stirling-based power system development. NASA Glenn Research Center recently completed the assembly of the ASRG Engineering Unit 2 (EU2). The ASRG EU2 consists of the first pair of Sunpower's Advanced Stirling Convertor E3 (ASC-E3) Stirling convertors mounted in an aluminum housing, and Lockheed Martin's Engineering Development Unit (EDU) 4 controller (a fourth-generation controller). The ASC-E3 convertors and Generator Housing Assembly (GHA) closely match the intended ASRG Qualification Unit flight design. A series of tests were conducted to characterize the EU2, its controller, and the convertors in the flight-like GHA. The GHA contained an argon cover gas for these tests. The tests included measurement of convertor, controller, and generator performance and efficiency; quantification of control authority of the controller; disturbance force measurement with varying piston phase and piston amplitude; and measurement of the effect of spacecraft direct current (DC) bus voltage on EU2 performance. The results of these tests are discussed and summarized, providing a basic understanding of EU2 characteristics and the performance and capability of the EDU 4 controller.

  3. Annotations of Mexican bullfighting videos for semantic index

    NASA Astrophysics Data System (ADS)

    Montoya Obeso, Abraham; Oropesa Morales, Lester Arturo; Fernando Vázquez, Luis; Cocolán Almeda, Sara Ivonne; Stoian, Andrei; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Montiel Perez, Jesús Yalja; de la O Torres, Saul; Ramírez Acosta, Alejandro Alvaro

    2015-09-01

    Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. It is therefore necessary to generate a database with a semantic indexing that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme for making complex annotations on video within the frame of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and different numbers of frames. To make complex annotations about the video, we use the ELAN software. The annotations are done in two steps: first, we take notes about the whole content of each shot; second, we describe the actions through camera parameters such as direction, position, and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset, and we also propose new concepts. This methodology allows the generation of a database with the information necessary to create descriptors and algorithms capable of detecting actions, in order to automatically index and classify new bullfighting multimedia content.

  4. NASA's Myriad Uses of Digital Video

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt; George, Sandy

    1999-01-01

    Since its inception, NASA has created many of the most memorable images seen this century. From the fuzzy video of Neil Armstrong taking that first step on the Moon to images of the Mars surface available to all on the Internet, NASA has provided images to inspire a generation, all because a scientist or researcher had a requirement to see something unusual. Digital television technology will give NASA unprecedented new tools for acquiring, analyzing, and distributing video. This paper will explore NASA's DTV future. The agency has a requirement to move video from one NASA Center to another in real time. Specifics will be provided relating to the NASA video infrastructure, including video from the Space Shuttle and from the various Centers. A comparison of the pros and cons of interlaced and progressive scanned images will be presented. Film is a major component of NASA's image acquisition for analysis usage. The future of film within the context of DTV will be explored.

  5. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and a stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that both AVC and HEVC compress PSDHS efficiently, with HEVC giving better performance. Good compression rates and reconstruction quality can be obtained at bitrates above 15000 kbps.
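The phase-shifting interferometry simulation behind PSDHS generation can be illustrated per pixel. This is a generic 4-step PSDH sketch under assumed reference amplitude and quarter-wave phase steps, not the paper's code; a full hologram frame is simply this arithmetic applied at every pixel.

```python
import cmath
import math

# Hedged sketch of 4-step phase-shifting digital holography for one pixel.
def interference_intensity(obj, ref_amp, phase_step):
    """Recorded intensity of object wave + phase-shifted reference wave."""
    ref = ref_amp * cmath.exp(1j * phase_step)
    return abs(obj + ref) ** 2

def recover_object(intensities, ref_amp):
    """Standard 4-step reconstruction: O = [(I0-I2) + i(I1-I3)] / (4R)."""
    i0, i1, i2, i3 = intensities
    return complex(i0 - i2, i1 - i3) / (4 * ref_amp)
```

Recording four intensity patterns at phase steps 0, pi/2, pi, 3pi/2 and combining them recovers the complex object wave exactly; it is these recorded intensity frames, evolving as the virtual object moves, that form the sequence handed to AVC/HEVC.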

  6. A computer-aided telescope pointing system utilizing a video star tracker

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.; Lorell, K. R.; Swift, C. D.

    1975-01-01

    The Video Inertial Pointing (VIP) system, developed to satisfy the acquisition and pointing requirements of astronomical telescopes, is described. A unique feature of the system is the use of a single sensor to provide information both for the generation of three-axis pointing error signals and for a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization, and the CRT display is used by an operator to facilitate target acquisition and to aid in manual positioning of the telescope's optical axis. A model of the system using a low-light-level vidicon was built and flown on a balloon-borne infrared telescope; an advanced version of the system, based on a state-of-the-art charge-coupled device (CCD) sensor, is briefly described. The advanced system hardware is described, and an analysis of the multi-star tracking and three-axis error signal generation, along with the analysis and design of the gyro update filter, are presented. Results of a hybrid simulation are described in which the advanced VIP system hardware is driven by a digital simulation of the star field/CCD sensor and an analog simulation of the telescope and gyro stabilization dynamics.

  7. Student-Generated Instructional Videos Facilitate Learning through Positive Emotions

    ERIC Educational Resources Information Center

    Pirhonen, Juhani; Rasi, Päivi

    2017-01-01

    The central focus of this study is a learning method in which university students produce instructional videos about the content matter as part of their learning process, combined with other learning assignments. The rationale for this is to promote a more multimodal pedagogy, and to provide students opportunities for a more learner-centred,…

  8. Adaptive maritime video surveillance

    NASA Astrophysics Data System (ADS)

    Gupta, Kalyan Moy; Aha, David W.; Hartley, Ralph; Moore, Philip G.

    2009-05-01

    Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities. It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video.

  9. Writing/Thinking in Real Time: Digital Video and Corpus Query Analysis

    ERIC Educational Resources Information Center

    Park, Kwanghyun; Kinginger, Celeste

    2010-01-01

    The advance of digital video technology in the past two decades facilitates empirical investigation of learning in real time. The focus of this paper is the combined use of real-time digital video and a networked linguistic corpus for exploring the ways in which these technologies enhance our capability to investigate the cognitive process of…

  10. The Effect of Online Violent Video Games on Levels of Aggression

    PubMed Central

    Hollingdale, Jack; Greitemeyer, Tobias

    2014-01-01

    Background: In recent years the video game industry has surpassed both the music and video industries in sales. Currently violent video games are among the most popular video games played by consumers, most specifically First-Person Shooters (FPS). Technological advancements in game play experience, including the ability to play online, have accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase levels of aggression. Little is known, however, as to the effect of playing a violent video game online. Methods/Principal Findings: Participants (N = 101) were randomly assigned to one of four experimental conditions: neutral video game—offline, neutral video game—online, violent video game—offline and violent video game—online. Following this they completed questionnaires to assess their attitudes towards the game and engaged in a chilli sauce paradigm to measure behavioural aggression. The results identified that participants who played a violent video game exhibited more aggression than those who played a neutral video game. Furthermore, this main effect was not particularly pronounced when the game was played online. Conclusions/Significance: These findings suggest that playing violent video games, whether online or offline, increases aggression compared to playing neutral video games. PMID:25391143

  11. Advanced optical components for next-generation photonic networks

    NASA Astrophysics Data System (ADS)

    Yoo, S. J. B.

    2003-08-01

    Future networks will require very high throughput, carrying dominantly data-centric traffic. The role of photonic networks employing all-optical systems will become increasingly important in providing scalable bandwidth, agile reconfigurability, and low power consumption in the future. In particular, the self-similar nature of data traffic indicates that packet switching and burst switching will be beneficial in next generation photonic networks. While the natural conclusion is to pursue Photonic Packet Switching and Photonic Burst Switching systems, there are significant challenges in realizing such systems due to practical limitations in optical component technologies. The lack of a viable all-optical memory technology will continue to drive us towards exploring rapid reconfigurability in the wavelength domain. We introduce and discuss the advanced optical component technologies behind the Photonic Packet Routing system designed and demonstrated at UC Davis. The system is capable of packet switching and burst switching, as well as circuit switching, with 600 psec switching speed and scalability to 42 petabit/sec aggregated switching capacity. By utilizing a combination of rapidly tunable wavelength conversion and a uniform-loss cyclic frequency (ULCF) arrayed waveguide grating router (AWGR), the system is capable of rapidly switching packets in the wavelength, time, and space domains. The label swapping module inside the Photonic Packet Routing system, containing a Mach-Zehnder wavelength converter and a narrow-band fiber Bragg grating, achieves all-optical label swapping with optical 2R (potentially 3R) regeneration while maintaining optical transparency for the data payload. By utilizing the advanced optical component technologies, the Photonic Packet Routing system successfully demonstrated error-free, cascaded, multi-hop photonic packet switching and routing with optical label swapping. This paper will review the advanced optical component technologies

  12. The H.264/AVC advanced video coding standard: overview and introduction to the fidelity range extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay

    2004-11-01

    H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.

  13. Deep Lake Explorer: Using citizen science to analyze underwater video from the Great Lakes

    EPA Science Inventory

    While underwater video collection technology continues to improve, advancements in underwater video analysis techniques have lagged. Crowdsourcing image interpretation using the Zooniverse platform has proven successful for many projects, but few projects to date have included vi...

  14. Smartphone based automatic organ validation in ultrasound video.

    PubMed

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates ultrasound video based on the content present, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow sophisticated medical image processing to be performed on the device itself. In this paper we have developed a smartphone application (app) that automatically detects the valid frames (those with clear organ visibility) in an ultrasound video, ignores the invalid frames (those with no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
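The quadratic-kernel SVM decision rule used here for valid/invalid frame classification can be sketched as follows. The support vectors, dual coefficients, and bias in the usage example are made-up toy values standing in for what training on GIST features would produce.

```python
# Hedged sketch of a quadratic-kernel SVM decision function.
def quadratic_kernel(u, v, c=1.0):
    """Degree-2 polynomial kernel: K(u, v) = (u.v + c)^2."""
    return (sum(a * b for a, b in zip(u, v)) + c) ** 2

def svm_decision(x, support_vectors, dual_coefs, bias):
    """Sign of the kernel expansion: +1 for a valid frame, -1 for invalid."""
    score = sum(a * quadratic_kernel(sv, x)
                for sv, a in zip(support_vectors, dual_coefs)) + bias
    return 1 if score >= 0 else -1
```

In the app itself, `x` would be the GIST feature vector extracted from a frame's ROI, and the support vectors and coefficients would come from an offline training run.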

  15. Next Generation Integrated Environment for Collaborative Work Across Internets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey B. Newman

    2009-02-24

    We are now well-advanced in our development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity, while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations, has a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange, and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions, policy-based and network state-based resource scheduling, authentication, and optional encryption to maintain confidentiality of inter-personal communications. High performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application-sets.

  16. Long Term Activity Analysis in Surveillance Video Archives

    ERIC Educational Resources Information Center

    Chen, Ming-yu

    2010-01-01

    Surveillance video recording is becoming ubiquitous in daily life for public areas such as supermarkets, banks, and airports. The rate at which surveillance video is being generated has accelerated demand for machine understanding to enable better content-based search capabilities. Analyzing human activity is one of the key tasks to understand and…

  17. Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays

    NASA Astrophysics Data System (ADS)

    Alexander, Jon; Keller, Tim

    2007-04-01

    ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.

  18. Parachute Aerodynamics From Video Data

    NASA Technical Reports Server (NTRS)

    Schoenenberger, Mark; Queen, Eric M.; Cruz, Juan R.

    2005-01-01

    A new data analysis technique for the identification of static and dynamic aerodynamic stability coefficients from wind tunnel test video data is presented. This new technique was applied to video data obtained during a parachute wind tunnel test program conducted in support of the Mars Exploration Rover Mission. Total angle-of-attack data obtained from video images were used to determine the static pitching moment curve of the parachute. During the original wind tunnel test program the static pitching moment curve had been determined by forcing the parachute to a specific total angle-of-attack and measuring the forces generated. It is shown with the new technique that this parachute, when free to rotate, trims at an angle-of-attack two degrees lower than was measured during the forced-angle tests. An attempt was also made to extract pitch damping information from the video data. Results suggest that the parachute is dynamically unstable at the static trim point and tends to become dynamically stable away from the trim point. These trends are in agreement with limit-cycle-like behavior observed in the video. However, the chaotic motion of the parachute produced results with large uncertainty bands.

  19. Treatment Considerations in Internet and Video Game Addiction: A Qualitative Discussion.

    PubMed

    Greenfield, David N

    2018-04-01

    Internet and video game addiction has been a steadily developing consequence of modern living. Behavioral and process addictions and particularly Internet and video game addiction require specialized treatment protocols and techniques. Recent advances in addiction medicine have improved our understanding of the neurobiology of substance and behavioral addictions. Novel research has expanded the ways we understand and apply well-established addiction treatments as well as newer therapies specific to Internet and video game addiction. This article reviews the etiology, psychology, and neurobiology of Internet and video game addiction and presents treatment strategies and protocols for addressing this growing problem. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Lyme Disease and YouTube™: A Cross-Sectional Study of Video Contents.

    PubMed

    Basch, Corey H; Mullican, Lindsay A; Boone, Kwanza D; Yin, Jingjing; Berdnik, Alyssa; Eremeeva, Marina E; Fung, Isaac Chun-Hai

    2017-08-01

    Lyme disease is the most common tick-borne disease. People seek health information on Lyme disease from YouTube™ videos. In this study, we investigated whether the contents of Lyme disease-related YouTube™ videos varied by their sources. The most viewed English-language YouTube™ videos (n = 100) were identified and manually coded for contents and sources. Within the sample, 40 videos were consumer-generated, 31 were internet-based news, 16 were professional, and 13 were TV news. Compared with consumer-generated videos, TV news videos were more likely to mention celebrities (odds ratio [OR], 10.57; 95% confidence interval [CI], 2.13-52.58), prevention of Lyme disease through wearing protective clothing (OR, 5.63; 95% CI, 1.23-25.76), and spraying insecticides (OR, 7.71; 95% CI, 1.52-39.05). A majority of the most popular Lyme disease-related YouTube™ videos were not created by public health professionals. Responsible reporting and creative video-making facilitate Lyme disease education. Partnership with YouTube™ celebrities to co-develop educational videos may be a future direction.
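Odds ratios with Wald 95% confidence intervals of the form reported above are computed from 2x2 counts; a generic sketch (the counts below are hypothetical illustrations, not the study's actual data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 8 of 13 TV news videos vs. 5 of 40
# consumer-generated videos mentioning a given topic.
print(odds_ratio_ci(8, 5, 5, 35))
```

A wide interval like those above (e.g., 2.13-52.58) is typical when one cell of the table is small, since the standard error of log(OR) sums the reciprocals of all four counts.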

  1. GRC Supporting Technology for NASA's Advanced Stirling Radioisotope Generator (ASRG)

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey G.; Thieme, Lanny G.

    2008-01-01

    From 1999 to 2006, the NASA Glenn Research Center (GRC) supported a NASA project to develop a high-efficiency, nominal 110-We Stirling Radioisotope Generator (SRG110) for potential use on NASA missions. Lockheed Martin was selected as the System Integration Contractor for the SRG110, under contract to the Department of Energy (DOE). The potential applications included deep space missions and Mars rovers. The project was redirected in 2006 to make use of the Advanced Stirling Convertor (ASC) that was being developed by Sunpower, Inc. under contract to GRC, which would reduce the mass of the generator and increase the power output. This change would approximately double the specific power and result in the Advanced Stirling Radioisotope Generator (ASRG). The SRG110 supporting technology effort at GRC was replanned to support the integration of the Sunpower convertor and the ASRG. This paper describes the ASRG supporting technology effort at GRC and provides details of the contributions in some of the key areas. The GRC tasks include convertor extended-operation testing in air and in thermal vacuum environments, heater head life assessment, materials studies, permanent magnet characterization and aging tests, structural dynamics testing, electromagnetic interference and electromagnetic compatibility characterization, evaluation of organic materials, reliability studies, and analysis to support controller development.

  2. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
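The pre-trigger/post-trigger storage scheme described above can be sketched in software (a simplified stand-in for the VLSI state machine; the frame source, threshold, and buffer sizes below are illustrative assumptions):

```python
import numpy as np
from collections import deque

def triggered_capture(frames, threshold, pre=3, post=3):
    """Archive only frames around a change event: keep a ring buffer
    of `pre` recent frames; when the mean absolute inter-frame
    difference exceeds `threshold`, archive the buffered (pre-trigger)
    frames plus the triggering frame and the next `post` frames."""
    archive, buf = [], deque(maxlen=pre)
    countdown, prev = 0, None
    for f in frames:
        if prev is not None and np.mean(np.abs(f - prev)) > threshold:
            archive.extend(buf)    # flush pre-trigger history
            buf.clear()
            countdown = post + 1   # triggering frame + post-trigger frames
        if countdown > 0:
            archive.append(f)
            countdown -= 1
        else:
            buf.append(f)          # no event: just buffer
        prev = f
    return archive
```

With a step change between two otherwise static scenes, only the frames bracketing the transition are archived, which is the point of the scheme: the bulk of an hours-long microgravity recording never reaches storage.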

  3. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  4. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
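As a rough illustration of the baseline idea (not the paper's modified EM, which estimates frame saliency jointly during training), one can fit a GMM to pooled spatiotemporal features and score each frame by its mean log-likelihood; the data and dimensions below are synthetic stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-in for 6-D spatiotemporal features (e.g., x, y, t, L, a, b):
# 20 "frames" of 50 pixels each, drawn from two clusters.
frames = [np.vstack([rng.normal(0, 1, (25, 6)), rng.normal(4, 1, (25, 6))])
          for _ in range(20)]

# Fit one GMM to all frames' features pooled together.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(np.vstack(frames))

# Simplified frame "saliency": mean log-likelihood of a frame's
# features under the fitted model.
saliency = np.array([gmm.score(f) for f in frames])
salient = np.argsort(saliency)[-5:]   # the five most salient frames
```

The selected salient frames would then be used to refine the GMM estimate, reducing redundancy in the training data as the abstract describes.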

  5. A method of mobile video transmission based on J2EE

    NASA Astrophysics Data System (ADS)

    Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang

    2013-03-01

    As 3G (third-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2EE, presenting the video compression approach, the compression standard used, and the software design. The proposed J2EE-based mobile video method is a typical mobile multimedia application with high availability and a wide range of uses. Users can access the video through terminal devices such as mobile phones.

  6. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the observation that video content is becoming richer and its feature dimensionality higher, and this high dimensionality leaves no natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the approach achieves good robustness and discrimination.
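A minimal sketch of such a pipeline, assuming hypothetical per-frame descriptors and using scikit-learn's LTSA implementation (the paper's fingerprint design and sliding-window details differ):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
features = rng.normal(size=(100, 32))   # hypothetical per-frame descriptors

# Reduce each frame descriptor to a low-dimensional fingerprint via LTSA.
ltsa = LocallyLinearEmbedding(n_neighbors=10, n_components=3,
                              method="ltsa", eigen_solver="dense")
fingerprints = ltsa.fit_transform(features)      # shape (100, 3)

def best_match(query, reference):
    """Slide the query fingerprint sequence over the reference and
    return the offset with the smallest mean Euclidean distance."""
    n, m = len(reference), len(query)
    dists = [np.linalg.norm(reference[i:i + m] - query, axis=1).mean()
             for i in range(n - m + 1)]
    return int(np.argmin(dists))

query = fingerprints[40:50]             # a 10-frame excerpt (the "copy")
print(best_match(query, fingerprints))  # → 40
```

An exact excerpt matches its source offset with zero distance; robustness in practice comes from the fingerprints being stable under the distortions a real copy undergoes.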

  7. Veterinary students' usage and perception of video teaching resources

    PubMed Central

    2011-01-01

    Background The purpose of our study was to use a student-centred approach to develop an online video learning resource (called 'Moo Tube') at the School of Veterinary Medicine and Science, University of Nottingham, UK and also to provide guidance for other academics in the School wishing to develop a similar resource in the future. Methods A focus group in the format of the nominal group technique was used to garner the opinions of 12 undergraduate students (3 from year-1, 4 from year-2 and 5 from year-3). Students generated lists of items in response to key questions; these responses were thematically analysed to generate key themes which were compared between the different year groups. The number of visits to 'Moo Tube' before and after an objective structured practical examination (OSPE) was also analysed to provide data on video usage. Results Students highlighted a number of strengths of video resources which can be grouped into four overarching themes: (1) teaching enhancement, (2) accessibility, (3) technical quality and (4) video content. Of these themes, students rated teaching enhancement and accessibility most highly. Video usage was seen to significantly increase (P < 0.05) prior to an examination and significantly decrease (P < 0.05) following the examination. Conclusions The students had a positive perception of video usage in higher education. Video usage increases prior to practical examinations. Image quality was a greater concern with year-3 students than with either year-1 or 2 students but all groups highlighted the following as important issues: i) good sound quality, ii) accessibility, including location of videos within electronic libraries, and iii) video content. Based on the findings from this study, guidelines are suggested for those developing undergraduate veterinary videos. We believe that many aspects of our list will have resonance in other areas of medical education and higher education. PMID:21219639

  8. Veterinary students' usage and perception of video teaching resources.

    PubMed

    Roshier, Amanda L; Foster, Neil; Jones, Michael A

    2011-01-10

    The purpose of our study was to use a student-centred approach to develop an online video learning resource (called 'Moo Tube') at the School of Veterinary Medicine and Science, University of Nottingham, UK and also to provide guidance for other academics in the School wishing to develop a similar resource in the future. A focus group in the format of the nominal group technique was used to garner the opinions of 12 undergraduate students (3 from year-1, 4 from year-2 and 5 from year-3). Students generated lists of items in response to key questions; these responses were thematically analysed to generate key themes which were compared between the different year groups. The number of visits to 'Moo Tube' before and after an objective structured practical examination (OSPE) was also analysed to provide data on video usage. Students highlighted a number of strengths of video resources which can be grouped into four overarching themes: (1) teaching enhancement, (2) accessibility, (3) technical quality and (4) video content. Of these themes, students rated teaching enhancement and accessibility most highly. Video usage was seen to significantly increase (P < 0.05) prior to an examination and significantly decrease (P < 0.05) following the examination. The students had a positive perception of video usage in higher education. Video usage increases prior to practical examinations. Image quality was a greater concern with year-3 students than with either year-1 or 2 students but all groups highlighted the following as important issues: i) good sound quality, ii) accessibility, including location of videos within electronic libraries, and iii) video content. Based on the findings from this study, guidelines are suggested for those developing undergraduate veterinary videos. We believe that many aspects of our list will have resonance in other areas of medical education and higher education.

  9. Zika Virus on YouTube: An Analysis of English-language Video Content by Source

    PubMed Central

    2017-01-01

    Objectives The purpose of this study was to describe the source, length, number of views, and content of the most widely viewed Zika virus (ZIKV)-related YouTube videos. We hypothesized that ZIKV-related videos uploaded by different sources contained different content. Methods The 100 most viewed English ZIKV-related videos were manually coded and analyzed statistically. Results Among the 100 videos, there were 43 consumer-generated videos, 38 Internet-based news videos, 15 TV-based news videos, and 4 professional videos. Internet news sources captured over two-thirds of the total of 8 894 505 views. Compared with consumer-generated videos, Internet-based news videos were more likely to mention the impact of ZIKV on babies (odds ratio [OR], 6.25; 95% confidence interval [CI], 1.64 to 23.76), the number of cases in Latin America (OR, 5.63; 95% CI, 1.47 to 21.52); and ZIKV in Africa (OR, 2.56; 95% CI, 1.04 to 6.31). Compared with consumer-generated videos, TV-based news videos were more likely to express anxiety or fear of catching ZIKV (OR, 6.67; 95% CI, 1.36 to 32.70); to highlight fear of ZIKV among members of the public (OR, 7.45; 95% CI, 1.20 to 46.16); and to discuss avoiding pregnancy (OR, 3.88; 95% CI, 1.13 to 13.25). Conclusions Public health agencies should establish a larger presence on YouTube to reach more people with evidence-based information about ZIKV. PMID:28372356

  10. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  11. Advanced Stirling Radioisotope Generator EU2 Anomaly Investigation

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Dobbs, Michael W.; Oriti, Salvatore M.

    2016-01-01

    The Advanced Stirling Radioisotope Generator (ASRG) Engineering Unit 2 (EU2) is the highest fidelity electrically heated Stirling radioisotope generator built to date. NASA Glenn Research Center (GRC) completed the assembly of the ASRG EU2 in September 2014 using hardware from the now cancelled ASRG flight development project. The ASRG EU2 integrated the first pair of Sunpower's ASC-E3 Stirling convertors (ASC-E3 #1 and #2) in an aluminum generator housing with Lockheed Martin's Engineering Development Unit (EDU) 4 controller. After just 179 hours of EU2 generator operation, the first power fluctuation occurred on ASC-E3 #1. The first power fluctuation occurred 175 hours later on ASC-E3 #2. Over time, the power fluctuations became more frequent on both convertors and larger in magnitude. Eventually the EU2 was shut down in January 2015. An anomaly investigation was chartered to determine root cause of the power fluctuations and other anomalous observations. A team with members from GRC, Sunpower, and Lockheed Martin conducted a thorough investigation of the EU2 anomalies. Findings from the EU2 disassembly identified proximate causes of the anomalous observations. Discussion of the team's assessment of the primary possible failure theories, root cause, and conclusions is provided. Recommendations are made for future Stirling generator development to address the findings from the anomaly investigation. Additional findings from the investigation are also discussed.

  12. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  13. Chroma sampling and modulation techniques in high dynamic range video coding

    NASA Astrophysics Data System (ADS)

    Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj

    2015-09-01

    High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
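The 4:4:4-to-4:2:0 chroma downsampling step mentioned above can be illustrated with a simple 2x2 box filter (the paper uses more advanced resampling filters; the plane contents here are toy values):

```python
import numpy as np

def downsample_420(y, cb, cr):
    """Convert 4:4:4 planes to 4:2:0 by 2x2 box-averaging the two
    chroma planes; luma is kept at full resolution. Assumes even
    plane dimensions."""
    def box2x2(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, box2x2(cb), box2x2(cr)

y = np.zeros((4, 4))                              # full-resolution luma
cb = np.arange(16, dtype=float).reshape(4, 4)     # toy chroma plane
cr = np.ones((4, 4))
y2, cb2, cr2 = downsample_420(y, cb, cr)
print(cb2.shape)   # → (2, 2)
```

Each chroma plane shrinks to a quarter of its original sample count, which is what makes the split signals codable by a standard Main10 HEVC encoder expecting 4:2:0 input.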

  14. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information-interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. Current experiments show that the system achieves high-quality video conversion with minimal board size.

  15. Cultural values and cross-cultural video consumption on YouTube.

    PubMed

    Park, Minsu; Park, Jaram; Baek, Young Min; Macy, Michael

    2017-01-01

    Video-sharing social media like YouTube provide access to diverse cultural products from all over the world, making it possible to test theories that the Web facilitates global cultural convergence. Drawing on a daily listing of YouTube's most popular videos across 58 countries, we investigate the consumption of popular videos in countries that differ in cultural values, language, gross domestic product, and Internet penetration rate. Although online social media facilitate global access to cultural products, we find this technological capability does not result in universal cultural convergence. Instead, consumption of popular videos in culturally different countries appears to be constrained by cultural values. Cross-cultural convergence is more advanced in cosmopolitan countries with cultural values that favor individualism and power inequality.

  16. Cultural values and cross-cultural video consumption on YouTube

    PubMed Central

    Macy, Michael

    2017-01-01

    Video-sharing social media like YouTube provide access to diverse cultural products from all over the world, making it possible to test theories that the Web facilitates global cultural convergence. Drawing on a daily listing of YouTube’s most popular videos across 58 countries, we investigate the consumption of popular videos in countries that differ in cultural values, language, gross domestic product, and Internet penetration rate. Although online social media facilitate global access to cultural products, we find this technological capability does not result in universal cultural convergence. Instead, consumption of popular videos in culturally different countries appears to be constrained by cultural values. Cross-cultural convergence is more advanced in cosmopolitan countries with cultural values that favor individualism and power inequality. PMID:28531228

  17. Development of Advanced Stirling Radioisotope Generator for Space Exploration

    NASA Technical Reports Server (NTRS)

    Chan, Jack; Wood, J. Gary; Schreiber, Jeffrey G.

    2007-01-01

    Under the joint sponsorship of the Department of Energy and NASA, a radioisotope power system utilizing Stirling power conversion technology is being developed for potential future space missions. The higher conversion efficiency of the Stirling cycle compared with that of Radioisotope Thermoelectric Generators (RTGs) used in previous missions (Viking, Pioneer, Voyager, Galileo, Ulysses, Cassini, and New Horizons) offers the advantage of a four-fold reduction in PuO2 fuel, thereby saving cost and reducing radiation exposure to support personnel. With the advancement of state-of-the-art Stirling technology development under the NASA Research Announcement (NRA) project, the Stirling Radioisotope Generator program has evolved to incorporate the advanced Stirling convertor (ASC), provided by Sunpower, into an engineering unit. Due to the reduced envelope and lighter mass of the ASC compared to the previous Stirling convertor, the specific power of the flight generator is projected to increase from 3.5 to 7 We/kg, along with a 25 percent reduction in generator length. Modifications are being made to the ASC design to incorporate features for thermal, mechanical, and electrical integration with the engineering unit. These include the heat collector for hot end interface, cold-side flange for waste heat removal and structural attachment, and piston position sensor for ASC control and power factor correction. A single-fault tolerant, active power factor correction controller is used to synchronize the Stirling convertors, condition the electrical power from AC to DC, and to control the ASCs to maintain operation within temperature and piston stroke limits. Development activities at Sunpower and NASA Glenn Research Center (GRC) are also being conducted on the ASC to demonstrate the capability for long life, high reliability, and flight qualification needed for use in future missions.

  18. Helping Video Games Rewire "Our Minds"

    NASA Technical Reports Server (NTRS)

    Pope, Alan T.; Palsson, Olafur S.

    2001-01-01

    Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.
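The continuous-modulation idea can be sketched as follows; the band-ratio signal, gain, and clamping below are illustrative assumptions, not the patented method's actual parameters:

```python
def modulated_speed(base_speed, band_ratio, gain=0.5):
    """Scale a game character's speed each tick by the deviation of an
    EEG frequency-band ratio from a target value of 1.0, rather than
    driving a feedback display directly."""
    factor = 1.0 + gain * (band_ratio - 1.0)
    return base_speed * max(0.1, min(2.0, factor))  # clamp for playability

# A focused player (ratio > 1) moves faster than a distracted one.
assert modulated_speed(10.0, 1.4) > modulated_speed(10.0, 0.8)
```

The key property is that the game remains fully playable through its normal controls; the physiological signal only biases task parameters, so entertainment value is preserved.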

  19. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, June 2016, US Army Research Laboratory: Complex Event Processing for Content-Based Text, Image, and Video Retrieval.

  20. YouTube as a Qualitative Research Asset: Reviewing User Generated Videos as Learning Resources

    ERIC Educational Resources Information Center

    Chenail, Ronald J.

    2011-01-01

    YouTube, the video hosting service, offers students, teachers, and practitioners of qualitative researchers a unique reservoir of video clips introducing basic qualitative research concepts, sharing qualitative data from interviews and field observations, and presenting completed research studies. This web-based site also affords qualitative…

  1. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.

    PubMed

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
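Once binary codes are learned, retrieval reduces to Hamming-distance ranking; a generic sketch of that step (the codes below are random stand-ins, not SSVH outputs):

```python
import numpy as np

rng = np.random.default_rng(2)
# 1000 database videos, each hashed to a 64-bit binary code (random here).
database = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)

def hamming_search(query, codes, k=5):
    """Rank database videos by Hamming distance between binary codes,
    the retrieval step any learned video-hashing method relies on."""
    dists = np.count_nonzero(codes != query, axis=1)
    order = np.argsort(dists, kind="stable")[:k]
    return order, np.sort(dists)[:k]

query = database[123].copy()
query[:2] ^= 1                      # corrupt a known code by two bits
idx, d = hamming_search(query, database)
print(idx[0], d[0])                 # nearest neighbor and its distance
```

Because Hamming distance over packed binary codes is a popcount, this lookup scales to large video collections far better than distance computations over real-valued features.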

  2. Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder

    NASA Astrophysics Data System (ADS)

    Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang

    2018-07-01

    Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with fewer computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.

  3. Enhancing Proficiency Level Using Digital Video

    ERIC Educational Resources Information Center

    Fujioka-Ito, Noriko

    2009-01-01

    This article reports a case study where the data was collected at one university in the United States. It shows the benefits of using digital videos in intermediate-level Japanese language course curriculum so that learners can develop a higher level of proficiency. Since advanced-level speakers, according to the American Council on the Teaching…

  4. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the convex polygonal block boundary of each sub-domain at each level can be extracted in an advancing scheme. In this paper, several examples were used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.
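    The block-extraction step is the AEM's novelty; once a convex block is extracted, filling it with a structured mesh is standard algebraic mesh generation. A minimal transfinite-interpolation sketch for one block (a generic textbook technique, not the AEM itself; the boundary curves and grid sizes are illustrative):

    ```python
    import math

    def transfinite_mesh(bottom, top, left, right, ni, nj):
        """Structured mesh for one convex block via transfinite interpolation.
        Each boundary is a function of a parameter in [0, 1] returning (x, y);
        corners of adjacent boundaries must coincide."""
        def blend(xi, eta):
            pts = (bottom(xi), top(xi), left(eta), right(eta),
                   bottom(0.0), bottom(1.0), top(0.0), top(1.0))
            out = []
            for k in range(2):  # x coordinate, then y coordinate
                b, t, l, r, c00, c10, c01, c11 = (p[k] for p in pts)
                # Bilinear blend of the four boundary curves minus the
                # doubly counted corner contributions (the TFI formula).
                out.append((1 - eta) * b + eta * t + (1 - xi) * l + xi * r
                           - (1 - xi) * (1 - eta) * c00 - xi * (1 - eta) * c10
                           - (1 - xi) * eta * c01 - xi * eta * c11)
            return tuple(out)
        return [[blend(i / (ni - 1), j / (nj - 1)) for i in range(ni)]
                for j in range(nj)]

    # Unit square with a gently curved bottom boundary (e.g. a channel bed).
    mesh = transfinite_mesh(
        bottom=lambda t: (t, 0.1 * math.sin(math.pi * t)),
        top=lambda t: (t, 1.0),
        left=lambda t: (0.0, t),
        right=lambda t: (1.0, t),
        ni=5, nj=5)
    ```

    The interior grid lines smoothly follow the curved bottom, which is what "body-fitted" means in practice.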

  5. Orbital Express Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Ricky; Heaton, Andy; Pinson, Robin; Carrington, Connie

    2008-01-01

    In May 2007 the first US fully autonomous rendezvous and capture was successfully performed by DARPA's Orbital Express (OE) mission. Since then, the Boeing ASTRO spacecraft and the Ball Aerospace NEXTSat have performed multiple rendezvous and docking maneuvers to demonstrate the technologies needed for satellite servicing. MSFC's Advanced Video Guidance Sensor (AVGS) is a primary near-field proximity operations sensor integrated into ASTRO's Autonomous Rendezvous and Capture Sensor System (ARCSS), which provides relative state knowledge to the ASTRO GN&C system. This paper provides an overview of the AVGS sensor flying on Orbital Express, and a summary of the ground testing and on-orbit performance of the AVGS for OE. The AVGS is a laser-based system that is capable of providing range and bearing at midrange distances and full six degree-of-freedom (6DOF) knowledge at near fields. The sensor fires lasers at two different frequencies to illuminate the Long Range Targets (LRTs) and the Short Range Targets (SRTs) on NEXTSat. Subtraction of one image from the other image removes extraneous light sources and reflections from anything other than the corner cubes on the LRTs and SRTs. This feature has played a significant role for Orbital Express in poor lighting conditions. The very bright spots that remain in the subtracted image are processed by the target recognition algorithms and the inverse-perspective algorithms, to provide 3DOF or 6DOF relative state information. Although Orbital Express has configured the ASTRO ARCSS system to only use AVGS at ranges of 120 m or less, some OE scenarios have provided opportunities for AVGS to acquire and track NEXTSat at greater distances. Orbital Express scenarios to date that have utilized AVGS include a berthing operation performed by the ASTRO robotic arm, sensor checkout maneuvers performed by the ASTRO robotic arm, 10-m unmated operations, 30-m unmated operations, and Scenario 3-1 anomaly recovery. 
The AVGS performed very
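    The two-image subtraction step described above can be sketched in a few lines: illumination common to both frames (glare, stray reflections) cancels, so only the retroreflector returns survive thresholding. A toy version with 2-D lists standing in for camera frames (the real sensor follows this with target recognition and inverse-perspective pose estimation; the image values are invented):

    ```python
    def detect_spots(img_on, img_off, threshold):
        """Subtract the laser-off frame from the laser-on frame, then
        return centroids of connected bright regions."""
        h, w = len(img_on), len(img_on[0])
        diff = [[max(img_on[r][c] - img_off[r][c], 0) for c in range(w)]
                for r in range(h)]
        seen = [[False] * w for _ in range(h)]
        spots = []
        for r in range(h):
            for c in range(w):
                if diff[r][c] > threshold and not seen[r][c]:
                    stack, pixels = [(r, c)], []
                    seen[r][c] = True
                    while stack:  # flood-fill one connected bright region
                        y, x = stack.pop()
                        pixels.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and not seen[ny][nx]
                                    and diff[ny][nx] > threshold):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    cy = sum(p[0] for p in pixels) / len(pixels)
                    cx = sum(p[1] for p in pixels) / len(pixels)
                    spots.append((cy, cx))
        return spots

    # Glare appears in both frames and cancels; the corner-cube return
    # appears only in the laser-on frame.
    off = [[0] * 8 for _ in range(8)]
    on = [[0] * 8 for _ in range(8)]
    for c in range(4):
        off[0][c] = on[0][c] = 200       # sunlight glare in both images
    on[5][5] = on[5][6] = 250            # retroreflector return, "on" only
    print(detect_spots(on, off, threshold=100))  # → [(5.0, 5.5)]
    ```

    The subtracted image is dark everywhere except at the corner cubes, which is why the scheme is robust in the poor lighting conditions noted above.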

  6. Advanced Material Strategies for Next-Generation Additive Manufacturing

    PubMed Central

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  7. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    PubMed

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  8. Latest Highlights from our Direct Measurement Video Collection

    NASA Astrophysics Data System (ADS)

    Vonk, M.; Bohacek, P. H.

    2014-12-01

    Recent advances in technology have made videos much easier to produce, edit, store, transfer, and view. This has spawned an explosion in the production of a wide variety of pedagogical videos. But with the exception of student-made videos (which are often of poor quality), almost all of the educational videos being produced are passive. No matter how compelling the content, students are expected to simply sit and watch them. Because we feel that being engaged and active are necessary components of student learning, we have been working to create a free online library of Direct Measurement Videos (DMVs). These are short, high-quality videos of real events, shot in a way that allows students to make measurements directly from the video. Instead of handing students a word problem about a car skidding on ice, we actually show them the car skidding on ice. We then ask them to measure the important quantities, make calculations based on those measurements, and solve for unknowns. DMVs are more interesting than their word-problem equivalents and frequently inspire further questions about the physics of the situation or about the uncertainty of the measurement in ways that word problems almost never do. We feel that it is simply impossible to watch a video of a roller coaster or a rocket and then argue that word problems are better. In this talk I will highlight some new additions to our DMV collection. This work is supported by NSF TUES award #1245268
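    As a concrete example of the kind of direct measurement these videos enable, students convert measured pixel positions into physical speeds using a known scale and the frame rate. The positions, frame rate, and scale below are made up for illustration:

    ```python
    def speed_from_frames(p1, p2, frames_elapsed, fps, metres_per_pixel):
        """Average speed between two measured pixel positions: convert the
        pixel displacement with a known scale, divide by elapsed time."""
        dx = (p2[0] - p1[0]) * metres_per_pixel
        dy = (p2[1] - p1[1]) * metres_per_pixel
        dt = frames_elapsed / fps
        return (dx ** 2 + dy ** 2) ** 0.5 / dt

    # A car moves 300 px horizontally over 60 frames of 30 fps video;
    # with 1 px = 0.01 m that is 3 m in 2 s, i.e. about 1.5 m/s.
    v = speed_from_frames((100, 50), (400, 50), 60, 30.0, 0.01)
    print(v)  # ≈ 1.5 m/s
    ```

    Estimating how a one-pixel measurement error propagates into this result is exactly the kind of uncertainty discussion the videos provoke.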

  9. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., Computational, and Systems Biology [External Review Draft]'' (EPA/600/R-13/214A). EPA is also announcing that... Advances in Molecular, Computational, and Systems Biology [External Review Draft]'' is available primarily...

  10. Advanced ceramic materials for next-generation nuclear applications

    NASA Astrophysics Data System (ADS)

    Marra, John

    2011-10-01

    The nuclear industry is at the eye of a 'perfect storm' with fuel oil and natural gas prices near record highs, worldwide energy demands increasing at an alarming rate, and increased concerns about greenhouse gas (GHG) emissions that have caused many to look negatively at long-term use of fossil fuels. This convergence of factors has led to a growing interest in revitalization of the nuclear power industry within the United States and across the globe. Many are surprised to learn that nuclear power provides approximately 20% of the electrical power in the US and approximately 16% of the world-wide electric power. With the above factors in mind, world-wide over 130 new reactor projects are being considered with approximately 25 new permit applications in the US. Materials have long played a very important role in the nuclear industry with applications throughout the entire fuel cycle; from fuel fabrication to waste stabilization. As the international community begins to look at advanced reactor systems and fuel cycles that minimize waste and increase proliferation resistance, materials will play an even larger role. Many of the advanced reactor concepts being evaluated operate at high-temperature requiring the use of durable, heat-resistant materials. Advanced metallic and ceramic fuels are being investigated for a variety of Generation IV reactor concepts. These include the traditional TRISO-coated particles, advanced alloy fuels for 'deep-burn' applications, as well as advanced inert-matrix fuels. In order to minimize wastes and legacy materials, a number of fuel reprocessing operations are being investigated. Advanced materials continue to provide a vital contribution in 'closing the fuel cycle' by stabilization of associated low-level and high-level wastes in highly durable cements, ceramics, and glasses. Beyond this fission energy application, fusion energy will demand advanced materials capable of withstanding the extreme environments of high

  11. Impact of Super Monkey Ball and Underground video games on basic and advanced laparoscopic skill training.

    PubMed

    Rosser, James C; Liu, Xinwei; Jacobs, Charles; Choi, Katherine Mia; Jalink, Maarten B; Ten Cate Hoedemaker, Henk O

    2017-04-01

    This abstract compares correlations of the previously validated Super Monkey Ball (SMB) game and the recently introduced Underground (U) game on the Nintendo Wii U with multiple validated tasks used for developing basic and advanced laparoscopic skills. Sixty-eight participants, 53 residents and 15 attending surgeons, performed the Top Gun Pea Drop, FLS Peg Pass, intracorporeal suturing, and two video games (SMB and U). SMB is an over-the-counter game, and U was formulated for laparoscopic skill training. Spearman's rank correlations were computed between performance on the three validated laparoscopic training tasks and on SMB/U. The SMB score had a moderate correlation with intracorporeal suturing (ρ = 0.39, p < 0.01) and with the final score combining all three tasks (ρ = 0.39, p < 0.01), but low correlations with the Pea Drop Drill and FLS Peg Transfer (ρ = 0.11, 0.18, p < 0.01). The U score had a small correlation with intracorporeal suturing and the final score (ρ = 0.09, 0.13, p < 0.01). However, there were correlations between the U score and the Pea Drop Drill and FLS Peg Transfer (ρ = 0.24, 0.27, p < 0.01, respectively). In this study, SMB had a very significant correlation with intracorporeal suturing. U demonstrated more of a correlation with basic skills. At this point, our conclusion would be that both are effective for laparoscopic skill training and that they should be used in tandem rather than alone.
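    The statistic used throughout the study, Spearman's rank correlation, is simply Pearson correlation applied to average ranks. A self-contained sketch (the sample data are invented, not the study's scores):

    ```python
    def ranks(xs):
        """Average ranks (1-based), with ties assigned their mean rank."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman(x, y):
        """Spearman's rho = Pearson correlation of the rank vectors."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        den = (sum((a - mx) ** 2 for a in rx)
               * sum((b - my) ** 2 for b in ry)) ** 0.5
        return num / den

    # Perfect monotonic agreement gives rho = 1; a reversal gives -1.
    print(spearman([1, 2, 3, 4, 5], [10, 40, 90, 160, 250]))  # → 1.0
    print(spearman([1, 2, 3], [3, 2, 1]))                     # → -1.0
    ```

    Because only ranks matter, a nonlinear but monotonic relationship (as in the first example) still yields ρ = 1, which is why the statistic suits skill scores on different scales.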

  12. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track, and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner usable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content and on video metadata to provide filtering and locate segments of interest in the context of an analyst query. We utilize a novel machine-vision-based approach to index FMV, using object recognition and tracking together with event and activity detection. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  13. NASA/DARPA advanced communications technology satellite project for evaluation of telemedicine outreach using next-generation communications satellite technology: Mayo Foundation participation.

    PubMed

    Gilbert, B K; Mitchell, M P; Bengali, A R; Khandheria, B K

    1999-08-01

    To describe the development of telemedicine capabilities-application of remote consultation and diagnostic techniques-and to evaluate the feasibility and practicality of such clinical outreach to rural and underserved communities with limited telecommunications infrastructures. In 1992, Mayo Foundation (Rochester, Minn, Jacksonville, Fla, and Scottsdale, Ariz), the National Aeronautics and Space Administration, and the Defense Advanced Research Projects Agency collaborated to create a complex network of fiberoptic landlines, video recording systems, satellite terminals, and specially developed data translators linking Mayo sites with other locations in the continental United States on an on-demand basis. The purpose was to transmit data via the asynchronous transfer mode (ATM) digital communications protocol over the Advanced Communications Technology Satellite. The links were intended to provide a conduit for transmission of data for patient-specific consultations between physicians, evaluation of medical imagery, and medical education for clinical staffs at remote sites. Low-data-rate (LDR) experiments went live late in 1993. Mayo Clinic Rochester successfully provided medical consultation and services to 2 small regional medical facilities. High-data-rate (HDR) experiments included studies of remote digital echocardiography, store-and-forward telemedicine, cardiac catheterization, and teleconsultation for congenital heart disease. These studies combined landline data transmission with use of the satellite. The complexity of the routing paths and network components, immaturity of available software, and inexperience with existing telecommunications caused significant study delays. These experiments demonstrated that next-generation satellite technology can provide batch and real-time imagery for telemedicine. 
The first-generation of the ATM and satellite network technology used in these experiments created several technical problems and inconveniences that should

  14. Learn More in Less Time: Fundamental Aquatic Skill Acquisition via Video Technology

    ERIC Educational Resources Information Center

    Roberts, Tom; Brown, Larry

    2008-01-01

    Recent advances in the technology field have changed the way video support should be considered. It is now much more user-friendly and feasible than it was as recently as 10 years ago. In part because of these significant strides, current literature supports the use of video technology in the classroom. This article focuses on the innovative use…

  15. Layered Wyner-Ziv video coding.

    PubMed

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular-LDPC-code-based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
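    The nested-scalar-quantization idea at the heart of WZC can be shown with a toy coset coder: the encoder sends only the quantizer bin index modulo a small number of cosets, and the decoder disambiguates using its side information (here standing in for the base-layer reconstruction). All numbers are illustrative; the real coder pairs this with LDPC-based Slepian-Wolf coding of the coset indices:

    ```python
    def wz_encode(x, step, num_cosets):
        """Quantize x and transmit only its coset index (quantizer bin
        modulo num_cosets) -- fewer bits than the full bin index."""
        q = round(x / step)
        return q % num_cosets

    def wz_decode(coset, side_info, step, num_cosets):
        """Pick the reconstruction in the signalled coset that is closest
        to the decoder's side information."""
        q_side = round(side_info / step)
        best = min(
            (q for q in range(q_side - num_cosets, q_side + num_cosets + 1)
             if q % num_cosets == coset),
            key=lambda q: abs(q * step - side_info),
        )
        return best * step

    # Source sample 10.3; side information 10.2 is close, so 2 cosets
    # (1 bit) suffice instead of the full quantizer bin index.
    c = wz_encode(10.3, 0.5, 2)
    print(wz_decode(c, 10.2, 0.5, 2))  # → 10.5, the quantized source value
    ```

    Decoding succeeds whenever the side information lies within half a coset spacing of the source, which is the usual correlation assumption in Wyner-Ziv coding.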

  16. Developing the Storyline for an Advance Care Planning Video for Surgery Patients: Patient-Centered Outcomes Research Engagement from Stakeholder Summit to State Fair.

    PubMed

    Aslakson, Rebecca A; Schuster, Anne L R; Lynch, Thomas J; Weiss, Matthew J; Gregg, Lydia; Miller, Judith; Isenberg, Sarina R; Crossnohere, Norah L; Conca-Cheng, Alison M; Volandes, Angelo E; Smith, Thomas J; Bridges, John F P

    2018-01-01

    Patient-centered outcomes research (PCOR) methods and social learning theory (SLT) require intensive interaction between researchers and stakeholders. Advance care planning (ACP) is valuable before major surgery, but a systematic review found no extant perioperative ACP tools. Consequently, PCOR methods and SLT can inform the development of an ACP educational video for patients and families preparing for major surgery. The objective was to develop and test the acceptability of an ACP video storyline. The design was a stakeholder-guided development of the ACP video storyline: design-thinking methods explored and prioritized stakeholder perspectives, and patients and family members evaluated storyboards containing the proposed storyline. The study was conducted at hospital outpatient surgical clinics, an in-person stakeholder summit, and the 2014 Maryland State Fair. Measurements were obtained through stakeholder engagement and a deidentified survey. Stakeholders evaluated and prioritized evidence from an environmental scan. A surgeon, family member, and palliative care physician team iteratively developed a script featuring 12 core themes and worked with a medical graphic designer to translate the script into storyboards. Over 10 days, 359 attendees of the 2014 Maryland State Fair evaluated the storyboards: 87% noted that they would be "very comfortable" or "comfortable" seeing the storyboard before major surgery, 89% considered the storyboards "very helpful" or "helpful," and 89% would "definitely recommend" or "recommend" this story to others preparing for major surgery. Through an iterative process utilizing diverse PCOR engagement methods and informed by SLT, storyboards were developed for an ACP video. Field testing revealed the storyline to be highly meaningful for surgery patients and family members.

  17. YouTube Professors: Scholars as Online Video Stars

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    This article takes a look at the rising popularity of professors as the latest YouTube stars. The popularity of their appearances on YouTube and other video-sharing sites is making it possible for classrooms to be opened up and making teaching--which once took place behind closed doors--a more public art. Web video has generated a new form of…

  18. Violence in E-rated video games.

    PubMed

    Thompson, K M; Haninger, K

    2001-08-01

    Children's exposure to violence, alcohol, tobacco and other substances, and sexual messages in the media are a source of public health concern; however, content in video games commonly played by children has not been quantified. To quantify and characterize the depiction of violence, alcohol, tobacco and other substances, and sex in video games rated E (for "Everyone"), analogous to the G rating of films, which suggests suitability for all audiences. We created a database of all existing E-rated video games available for rent or sale in the United States by April 1, 2001, to identify the distribution of games by genre and to characterize the distribution of content descriptors associated with these games. We played and assessed the content of a convenience sample of 55 E-rated video games released for major home video game consoles between 1985 and 2000. Game genre; duration of violence; number of fatalities; types of weapons used; whether injuring characters or destroying objects is rewarded or is required to advance in the game; depiction of alcohol, tobacco and other substances; and sexual content. Based on analysis of the 672 current E-rated video games played on home consoles, 77% were in sports, racing, or action genres and 57% did not receive any content descriptors. We found that 35 of the 55 games we played (64%) involved intentional violence for an average of 30.7% of game play (range, 1.5%-91.2%), and we noted significant differences in the amount of violence among game genres. Injuring characters was rewarded or required for advancement in 33 games (60%). The presence of any content descriptor for violence (n = 23 games) was significantly correlated with the presence of intentional violence in the game (at a 5% significance level based on a 2-sided Wilcoxon rank-sum test, t(53) = 2.59). Notably, 14 of 32 games (44%) that did not receive a content descriptor for violence contained acts of violence. 
Action and shooting games led to the largest numbers of

  19. Video face recognition against a watch list

    NASA Astrophysics Data System (ADS)

    Abbas, Jehanzeb; Dagli, Charlie K.; Huang, Thomas S.

    2007-10-01

    Due to the recent large increase in video surveillance data, collected in an effort to maintain high security at public places, we need more robust systems to analyze this data and make tasks like face recognition a realistic possibility in challenging environments. In this paper we explore a watch-list scenario in which we use an appearance-based model to classify query faces from low-resolution videos as either watch-list or non-watch-list faces, where the watch-list comprises the people we are interested in recognizing. We then use our simple yet powerful face recognition system to recognize the faces classified as watch-list faces. Our system uses simple feature machine algorithms from our previous work to match video faces against still images. To test our approach, we match video faces against a large database of still images obtained from a previous work in the field, collected from Yahoo News over a period of time. We do this matching in an efficient manner to come up with a faster and nearly real-time system. This system can be incorporated into a larger surveillance system equipped with advanced algorithms involving anomalous event detection and activity recognition. This is a step towards more secure and robust surveillance systems and efficient video data analysis.
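    The open-set watch-list logic can be sketched as nearest-neighbour matching with a rejection threshold: a query face is reported only if its best gallery similarity clears the threshold, otherwise it is treated as a non-watch-list face. This is a generic stand-in for the paper's feature-machine matcher; the features, names, and threshold are invented:

    ```python
    def cosine(a, b):
        """Cosine similarity between two feature vectors."""
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den

    def match_watchlist(query, gallery, threshold):
        """Return the best-matching watch-list identity, or None when no
        similarity clears the threshold (a non-watch-list face)."""
        best_name, best_sim = None, threshold
        for name, feat in gallery.items():
            sim = cosine(query, feat)
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name

    gallery = {"alice": [1.0, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
    print(match_watchlist([0.9, 0.15, 0.05], gallery, 0.8))  # → alice
    print(match_watchlist([0.1, 0.1, 1.0], gallery, 0.8))    # → None
    ```

    The threshold trades false alarms against missed detections, the central tuning decision in any watch-list deployment.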

  20. Bringing Evolution to a Technological Generation: A Case Study with the Video Game SPORE

    ERIC Educational Resources Information Center

    Poli, DorothyBelle; Berenotto, Christopher; Blankenship, Sara; Piatkowski, Bryan; Bader, Geoffrey A.; Poore, Mark

    2012-01-01

    The video game SPORE was found to hold characteristics that stimulate higher-order thinking even though it rated poorly for accurate science. Interested in evaluating whether a scientifically inaccurate video game could be used effectively, we exposed students to SPORE during an evolution course. Students that played the game reported that they…

  1. Preliminary Investigation of a Video-Based Stimulus Preference Assessment

    ERIC Educational Resources Information Center

    Snyder, Katie; Higbee, Thomas S.; Dayton, Elizabeth

    2012-01-01

    Video clips may be an effective format for presenting complex stimuli in preference assessments. In this preliminary study, we evaluated the correspondence between preference hierarchies generated from preference assessments that included either toys or videos of the toys. The top-ranked item corresponded in both assessments for 5 of the 6…

  2. Video Browsing on Handheld Devices

    NASA Astrophysics Data System (ADS)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development now enable users to play back video on their handheld devices in reasonable quality. However, given the form factor restrictions of such a mobile device, screen size remains a natural limit and - as the term "handheld" implies - always will be a critical resource. This is not only true for video but for any data that is processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, if you look at the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g. hundreds of entries in your music collection). Automatically adapting the zoom level to, for example, the width of table cells when double tapping on the screen enables reasonable browsing of web pages that were originally designed for large, desktop-PC-sized screens. A multi-touch interface allows you to easily zoom in and out of large text documents and images using two fingers. In the next section, we will illustrate that advanced techniques to browse large video files have been developed in the past years as well. However, if you look at state-of-the-art video players on mobile devices, normally just simple, VCR-like controls are supported (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small and thus not very sensitive timeline slider.

  3. United Sugpiaq Alutiiq (USA) Video Game: Preserving Traditional Knowledge, Culture, and Language

    ERIC Educational Resources Information Center

    Hall, Leslie D.; Sanderville, James Mountain Chief

    2009-01-01

    Video games are explored as a means of reviving dying indigenous languages. The design and production of the place-based United Sugpiaq Alutiiq (USA) video game prototype involved work across generations and across cultures. The video game is one part of a proposed digital environment where Sugcestun speakers in traditional Alaskan villages could…

  4. Science communication on YouTube: Factors that affect channel and video popularity.

    PubMed

    Welbourne, Dustin J; Grant, Will J

    2016-08-01

    YouTube has become one of the largest websites on the Internet. Among its many genres, both professional and amateur science communicators compete for audience attention. This article provides the first overview of science communication on YouTube and examines content factors that affect the popularity of science communication videos on the site. A content analysis of 390 videos from 39 YouTube channels was conducted. Although professionally generated content was greater in number, user-generated content was significantly more popular. Furthermore, videos that had consistent science communicators were more popular than those without a regular communicator. This study represents an important first step toward understanding the content factors that increase channel and video popularity of science communication on YouTube. © The Author(s) 2015.

  5. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    ERIC Educational Resources Information Center

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographical-based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place, allow for many non-vocal and inter-personal communication…

  6. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    NASA Astrophysics Data System (ADS)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveals that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission, can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
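    The "expose metadata, not raw video" idea can be sketched as a shared event repository that cameras of any ownership publish into and that an authorised officer queries by event type, area, and time. Everything here — class name, field names, event types — is an illustrative assumption, not the FVSA interface:

    ```python
    import datetime as dt

    class MetadataRepository:
        """Toy unified event store: each camera (state- or privately
        owned) pushes metadata events rather than raw video, and an
        authorised user queries across owners."""
        def __init__(self):
            self.events = []

        def publish(self, camera_id, owner, event_type, location, when):
            self.events.append(
                {"camera": camera_id, "owner": owner, "type": event_type,
                 "location": location, "time": when})

        def query(self, event_type, area, since):
            """Events of a given type inside a bounding box, after a time."""
            xmin, ymin, xmax, ymax = area
            return [e for e in self.events
                    if e["type"] == event_type
                    and xmin <= e["location"][0] <= xmax
                    and ymin <= e["location"][1] <= ymax
                    and e["time"] >= since]

    repo = MetadataRepository()
    repo.publish("cam-17", "private", "vehicle-alert", (3.0, 4.0),
                 dt.datetime(2015, 3, 1, 9, 30))
    repo.publish("cam-02", "state", "vehicle-alert", (9.0, 9.0),
                 dt.datetime(2015, 3, 1, 9, 45))
    hits = repo.query("vehicle-alert", (0, 0, 5, 5),
                      dt.datetime(2015, 3, 1, 9, 0))
    print([e["camera"] for e in hits])  # → ['cam-17']
    ```

    Because only triggered alerts and their metadata cross ownership boundaries, private operators can participate without handing over raw footage.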

  7. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
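    The rate-estimation step above is the crux of this scheme: Slepian-Wolf theory allows the encoder to transmit a source X at any rate at or above the conditional entropy H(X|Y) of X given the decoder's side information Y, without seeing Y itself. A minimal sketch of that bound, using hypothetical quantized-coefficient symbol counts (not taken from the patent):

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(X|Y) in bits per symbol, estimated from observed (x, y) pairs."""
    joint = Counter(pairs)
    y_marginal = Counter(y for _, y in pairs)
    n = len(pairs)
    h = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n                    # joint probability p(x, y)
        p_x_given_y = count / y_marginal[y]  # conditional probability p(x | y)
        h -= p_xy * math.log2(p_x_given_y)
    return h

# Hypothetical training pairs: source symbol X with decoder side information Y.
pairs = [(0, 0)] * 45 + [(1, 0)] * 5 + [(1, 1)] * 40 + [(0, 1)] * 10
rate = conditional_entropy(pairs)  # Slepian-Wolf bound: choose code rate R >= H(X|Y)
```

When X and Y are strongly correlated (as a frequency coefficient is with its prediction from previous frames), H(X|Y) is far below H(X), which is where the compression gain at a low-complexity encoder comes from.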

  8. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that a surveillance system ensure that, once recorded, video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without reference to any auxiliary information. We present a review of the various existing methods and conclude that much more work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  9. A content-based news video retrieval system: NVRS

    NASA Astrophysics Data System (ADS)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by categories such as politics, finance, entertainment, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and it generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is efficient.

  10. Geographic Video 3d Data Model And Retrieval

    NASA Astrophysics Data System (ADS)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video content. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the structured query language (SQL). Experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
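    The core query operation described above is a spatial-relation test between a query geometry and the stored video geometries. As an illustration, here is a minimal ray-casting point-in-polygon test over hypothetical VFLocation-style records; the record layout is invented for this sketch, and the actual model would express the same predicate in SQL over OGC geometries:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt inside polygon poly (list of (x, y) vertices)?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a rightward ray from pt with each polygon edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical VFLocation records: (frame_id, camera position in map coordinates).
vflocations = [(1, (0.5, 0.5)), (2, (3.0, 3.0)), (3, (0.2, 0.9))]
query_polygon = [(0, 0), (1, 0), (1, 1), (0, 1)]  # query region of interest

# "Which video frames were shot inside the query region?"
hits = [fid for fid, pos in vflocations if point_in_polygon(pos, query_polygon)]
```

An OGC-style backend would replace the loop with a spatial predicate such as `ST_Contains(query_polygon, VFLocation)` evaluated by the database.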

  11. Portrayal of smokeless tobacco in YouTube videos.

    PubMed

    Bromberg, Julie E; Augustson, Erik M; Backinger, Cathy L

    2012-04-01

    Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. In August 2010, researchers identified the top 20 search results on YouTube by "relevance" and "view count" for the following search terms: "ST," "chewing tobacco," "snus," and "Skoal." After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or "sensationalized" use of ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or "vlogs"), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people's knowledge, attitudes, and behaviors regarding ST use.

  12. Breaking News: Utilizing Video Simulations to Improve Educational Leaders' Public Speaking Skills

    ERIC Educational Resources Information Center

    Friend, Jennifer; Adams, April; Curry, George

    2011-01-01

    This article examines specific uses of video simulations in one educational leadership preparation program to advance future school and district leaders' skills related to public speaking and participation in televised news interviews. One faculty member and two advanced educational leadership candidates share their perspectives of several…

  13. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. G.; Schwieder, P. R.

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to the users concerning conference availability, scheduling, initiation, and termination. The menus are 'mouse' controlled. Once a conference is scheduled, a workstation at the hub monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  14. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates produced by the different methods on the recorded videos against readings from a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring into novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
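    The common core of such methods is locating the dominant cardiac-band frequency in a brightness trace extracted from the face region. As a rough illustration, here is a single-channel DFT peak search standing in for the full ICA pipeline described in the abstract; the trace, frame rate, and frequency band are synthetic assumptions:

```python
import math

def estimate_heart_rate(trace, fps, f_lo=0.7, f_hi=4.0, step=0.02):
    """Scan the cardiac band (here 0.7-4 Hz, i.e. 42-240 bpm) for the
    dominant frequency of a brightness trace and return it in beats/min."""
    n = len(trace)
    mean = sum(trace) / n
    x = [v - mean for v in trace]  # remove the DC component
    best_f, best_p = f_lo, -1.0
    f = f_lo
    while f <= f_hi:
        # Single-frequency DFT power at frequency f (Hz).
        re = sum(x[t] * math.cos(2 * math.pi * f * t / fps) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * f * t / fps) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += step
    return best_f * 60.0

# Synthetic 10 s mean-green-channel trace at 30 fps with a 72 bpm (1.2 Hz) pulse.
fps = 30.0
trace = [0.4 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(300)]
bpm = estimate_heart_rate(trace, fps)
```

ICA's role in the actual method is to unmix the RGB channels first, so that a cleaner pulse component is handed to a peak search like this one.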

  15. Video-augmented feedback for procedural performance.

    PubMed

    Wittler, Mary; Hartman, Nicholas; Manthey, David; Hiestand, Brian; Askew, Kim

    2016-06-01

    Residency programs must assess residents' achievement of core competencies for clinical and procedural skills. Video-augmented feedback may facilitate procedural skill acquisition and promote more accurate self-assessment. We conducted a randomized controlled study to investigate whether video-augmented verbal feedback leads to greater procedural skill and improved accuracy of self-assessment compared with verbal-only feedback. Participants were evaluated during procedural training for ultrasound-guided internal jugular central venous catheter (US IJ CVC) placement. All participants received feedback based on a validated 30-point checklist for US IJ CVC placement and a validated 6-point procedural global rating scale. Scores in both groups improved by a mean of 9.6 points (95% CI: 7.8-11.4) on the 30-point checklist, with no difference between groups in mean score improvement on the global rating scale. With regard to self-assessment, participant self-ratings diverged from faculty scoring, increasingly so after receiving feedback. Residents rated highly by faculty underestimated their skill, while those rated more poorly demonstrated increasing overestimation. Accuracy of self-assessment was not improved by the addition of video. While feedback advanced residents' skill, video-augmented feedback did not enhance skill acquisition or improve the accuracy of resident self-assessment compared with standard feedback.

  16. Informational value and bias of videos related to orthodontics screened on a video-sharing Web site.

    PubMed

    Knösel, Michael; Jung, Klaus

    2011-05-01

    To assess the informational value, intention, source, and bias of videos related to orthodontics screened by the video-sharing Internet platform YouTube. YouTube (www.youtube.com) was scanned in July 2010 for orthodontics-related videos using an adequately defined search term. Each of the first 30 search results of the scan was categorized with the system-generated sorts "by relevance" and "most viewed" (total: 60). These were rated independently by three assessors, who completed a questionnaire for each video. The data were analyzed statistically using Friedman's test for dependent samples, Kendall's tau, and Fleiss's kappa. The YouTube scan produced 5140 results. There was a wide variety of information about orthodontics available on YouTube, and the highest proportion of videos was found to originate from orthodontic patients. These videos were also the most viewed ones. The informational content of most of the videos was generally judged to be low, with a rather poor to inadequate representation of the orthodontic profession, although a moderately pro-orthodontics stance prevailed. It was noticeable that the majority of contributions of orthodontists to YouTube constituted advertising. This tendency was not viewed positively by the majority of YouTube users, as was evident in the divergence in the proportions when sorting by "relevance" and "most viewed." In the light of the very large number of people using the Internet as their primary source of information, orthodontists should recognize the importance of YouTube and similar social media Web sites in the opinion-forming process, especially in the case of adolescents.

  17. Public online information about tinnitus: A cross-sectional study of YouTube videos.

    PubMed

    Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai

    2018-01-01

    To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
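    The odds ratios reported above come from 2x2 contingency tables. A minimal sketch of the calculation, with a Woolf-method 95% confidence interval, follows; the counts are hypothetical, since the abstract reports the ORs but not the raw tables:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table and its Woolf 95% confidence interval.

        group of interest: a with outcome, b without
        reference group:   c with outcome, d without
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: 10 of 25 professional videos vs 2 of 33 media videos
# mention "objective tinnitus" (illustrative only, not the study's data).
or_, lo, hi = odds_ratio(10, 15, 2, 31)
```

A CI that excludes 1 (as in the study's professional-vs-media comparison) indicates a statistically significant association at the 5% level.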

  18. Public Online Information About Tinnitus: A Cross-Sectional Study of YouTube Videos

    PubMed Central

    Basch, Corey H.; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai

    2018-01-01

    Purpose: To examine the information about tinnitus contained in different video sources on YouTube. Materials and Methods: The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Results: Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning “objective tinnitus” in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual’s own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Conclusions: Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals’ experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media. PMID:29457600

  19. Advance directives for future dementia can be modified by a brief video presentation on dementia care: An experimental study.

    PubMed

    Volhard, Theresia; Jessen, Frank; Kleineidam, Luca; Wolfsgruber, Steffen; Lanzerath, Dirk; Wagner, Michael; Maier, Wolfgang

    2018-01-01

    To investigate whether life-sustaining measures in medical emergency situations are less accepted for an anticipated own future of living with dementia, and to test whether a resource-oriented, in contrast to a deficit-oriented, video about the same demented person would increase the acceptance of such life-saving measures. Experimental study conducted between September 2012 and February 2013. Community-dwelling female volunteers living in the region of Bonn, Germany. 278 women aged 19 to 89 (mean age 53.4 years). Presentation of a video on dementia care focusing either on the deficits of a demented woman (negative framing) or on the remaining resources (positive framing) of the same patient. Approval of life-sustaining treatments in five critical medical scenarios under the assumption of having comorbid dementia, before and after the presentation of the brief videos on care. At baseline, the acceptance of life-sustaining measures in critical medical situations was significantly lower in subjects anticipating their own future life with dementia. Participants watching the resource-oriented film on living with dementia had significantly higher post-film acceptance rates compared to those watching the deficit-oriented, negatively framed film. This effect particularly emerges if brief and efficient life-saving interventions with a high likelihood of physical recovery are available (e.g., antibiotic treatment for pneumonia). Anticipated decisions regarding life-sustaining measures are negatively influenced by the subjective imagination of living with dementia, which might be shaped by common, unquestioned stereotypes. This bias can be reduced by providing audio-visual information on living with dementia which is not only centred around cognitive and functional losses but also focuses on remaining resources and the apparent quality of life. This is particularly true if the medical threat can be treated efficiently. 
These findings have implications for the practice

  20. Temporal flicker reduction and denoising in video using sparse directional transforms

    NASA Astrophysics Data System (ADS)

    Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.

    2008-08-01

    The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
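    The sparsity-enforcement idea above can be illustrated in one dimension: transform the signal, zero the small coefficients (noise and flicker spread their energy thinly across the whole transform domain), and invert. This sketch uses a plain orthonormal DCT rather than the paper's adaptively chosen directional transforms:

```python
import math

def dct(x):
    """Orthonormal DCT-II."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(c):
    """Inverse of the orthonormal DCT-II (a scaled DCT-III)."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] * math.sqrt(1 / n)
        s += sum(c[k] * math.sqrt(2 / n) * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out

def denoise(x, threshold):
    """Enforce sparsity: zero transform coefficients below the threshold."""
    coeffs = dct(x)
    kept = [c if abs(c) >= threshold else 0.0 for c in coeffs]
    return idct(kept)

# A signal that is sparse in the DCT domain, plus small flicker-like noise.
clean = idct([5.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])
noise = [0.05, -0.04, 0.03, 0.06, -0.05, 0.02, -0.03, 0.04]
noisy = [v + e for v, e in zip(clean, noise)]
restored = denoise(noisy, threshold=0.5)
```

Because the clean signal occupies only two coefficients while the noise spreads over all eight, hard thresholding removes most of the noise energy while leaving the signal intact.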

  1. Goddard In The Galaxy [Music Video]

    NASA Image and Video Library

    2014-07-14

    This video highlights the many ways NASA Goddard Space Flight Center explores the universe. So crank up your speakers and let the music be your guide. "My Songs Know What You Did In The Dark (Light Em Up)" Performed by Fall Out Boy Courtesy of Island Def Jam Music Group under license from Universal Music Enterprises Download the video here: svs.gsfc.nasa.gov/goto?11378 NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  2. Identifying hidden voice and video streams

    NASA Astrophysics Data System (ADS)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, that detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting the different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
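    A toy sketch of the PSD-fingerprint idea: compute a normalized periodogram of a per-flow feature series (packet sizes here) and compare fingerprints by cosine similarity. The flows and features below are synthetic, and the real VVS-I feature extraction and matching are more involved:

```python
import math

def psd_fingerprint(series, n_bins=16):
    """Periodogram of a per-flow feature series, normalized to unit total
    energy so that flows of different volume remain comparable."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    psd = []
    for k in range(1, n_bins + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        psd.append(re * re + im * im)
    total = sum(psd) or 1.0
    return [p / total for p in psd]

def similarity(fp_a, fp_b):
    """Cosine similarity between two PSD fingerprints."""
    dot = sum(a * b for a, b in zip(fp_a, fp_b))
    na = math.sqrt(sum(a * a for a in fp_a))
    nb = math.sqrt(sum(b * b for b in fp_b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

# Hypothetical flows: a voice-like flow paces packet sizes periodically
# (codec frame cadence), while a bulk transfer sends full-size packets.
voice = [160 + 40 * math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
voice2 = [160 + 40 * math.sin(2 * math.pi * 4 * t / 64 + 0.3) for t in range(64)]
bulk = [1500.0] * 64
```

The periodic pacing of interactive media concentrates spectral energy in a few bins, giving voice/video flows a distinctive fingerprint regardless of the application that produced them.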

  3. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  4. Promoting Healthy Child Development via a Two-Generation Translational Neuroscience Framework: The Filming Interactions to Nurture Development Video Coaching Program.

    PubMed

    Fisher, Philip A; Frenkel, Tahl I; Noll, Laura K; Berry, Melanie; Yockelson, Melissa

    2016-12-01

    In this article, we focus on applying methods of translational neuroscience to two-generation, family-based interventions. In recent years, a small but growing body of evidence has documented the reversibility of some of the neurobiological effects of early adversity in the context of environmental early interventions. Some of these interventions are now being implemented at scale, which may help reduce disparities in the face of early life stress. Further progress may occur by extending these efforts to two-generation models that target caregivers' capabilities to improve children's outcomes. In this article, we describe the content and processes of the Filming Interactions to Nurture Development (FIND) video coaching intervention. We also discuss the two-generation, translational neuroscience framework on which FIND is based, and how similar approaches can be developed and scaled to mitigate the effects of adversity.

  5. Promoting Healthy Child Development via a Two-Generation Translational Neuroscience Framework: The Filming Interactions to Nurture Development Video Coaching Program

    PubMed Central

    Fisher, Philip A.; Frenkel, Tahl I.; Noll, Laura K.; Berry, Melanie; Yockelson, Melissa

    2017-01-01

    In this article, we focus on applying methods of translational neuroscience to two-generation, family-based interventions. In recent years, a small but growing body of evidence has documented the reversibility of some of the neurobiological effects of early adversity in the context of environmental early interventions. Some of these interventions are now being implemented at scale, which may help reduce disparities in the face of early life stress. Further progress may occur by extending these efforts to two-generation models that target caregivers’ capabilities to improve children’s outcomes. In this article, we describe the content and processes of the Filming Interactions to Nurture Development (FIND) video coaching intervention. We also discuss the two-generation, translational neuroscience framework on which FIND is based, and how similar approaches can be developed and scaled to mitigate the effects of adversity. PMID:28936231

  6. Assurance Technology Challenges of Advanced Space Systems

    NASA Technical Reports Server (NTRS)

    Chern, E. James

    2004-01-01

    The initiative to explore space and extend a human presence across our solar system, revisiting the moon and Mars, poses enormous technological challenges to the nation's space agency and aerospace industry. Key areas of technology development needed to enable the endeavor include advanced materials, structures and mechanisms; micro/nano sensors and detectors; power generation, storage and management; advanced thermal and cryogenic control; guidance, navigation and control; command and data handling; advanced propulsion; advanced communication; on-board processing; advanced information technology systems; modular and reconfigurable systems; precision formation flying; solar sails; distributed observing systems; space robotics; etc. Quality assurance concerns such as functional performance, structural integrity, radiation tolerance, health monitoring, diagnosis, maintenance, calibration, and initialization can affect the performance of systems and subsystems. It is thus imperative to employ innovative nondestructive evaluation methodologies to ensure the quality and integrity of advanced space systems. Advancements in integrated multi-functional sensor systems, autonomous inspection approaches, distributed embedded sensors, roaming inspectors, and shape-adaptive sensors are sought. Concepts in computational models for signal processing and data interpretation to establish quantitative characterization and event determination are also of interest. Prospective evaluation technologies include ultrasonics, laser ultrasonics, optics and fiber optics, shearography, video optics and metrology, thermography, electromagnetics, acoustic emission, x-ray, data management, biomimetics, and nano-scale sensing approaches for structural health monitoring.

  7. A focus group study of the use of video-recorded simulated objective structured clinical examinations in nurse practitioner education.

    PubMed

    Barratt, Julian

    2010-05-01

    The objective structured clinical examination (OSCE) is a common method of clinical skills assessment used for advanced nurse practitioner students across the United Kingdom. The purpose of an advanced nursing OSCE is to assess a nurse practitioner student's competence and safety in the performance of commonly used advanced clinical practice skills. Students often feel nervous when preparing for and participating in an OSCE. Consideration of these identified anxieties led to the development of an alternative method of meeting students' OSCE learning and preparation needs; namely video-recorded simulated OSCEs. Video-recording was appealing for the following reasons: it provides a flexible usage of staff resources and time; OSCE performance mistakes can be rectified; it is possible to use the same video-recordings with multiple cohorts of students, and the recordings can be made conveniently available for students with video streaming on internet-based video-sharing sites or virtual learning environments. The aim of the study was to explore the value of using such recordings amongst nurse practitioner students, via online and face-to-face focus groups, to see if they are a suitable OSCE educational preparation technique. The study findings indicate that simulated OSCE video-recordings are an effective method for supporting nurse practitioner educational development. Copyright 2009 Elsevier Ltd. All rights reserved.

  8. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance video presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards, which were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, with only a slight increase in encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
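
    The block-level decision logic described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-pixel matching threshold, the 0.9 background fraction, and the function names are assumptions.

```python
import numpy as np

def classify_block(block, bg_block, pix_thresh=12, bg_frac=0.9):
    """Label a block by the fraction of pixels that match the modeled background."""
    frac = (np.abs(block.astype(int) - bg_block.astype(int)) <= pix_thresh).mean()
    if frac >= bg_frac:
        return "background"
    if frac <= 1 - bg_frac:
        return "foreground"
    return "hybrid"

def predict_residual(block, bg_block, prev_block, prev_bg, category):
    """Pick the prediction residual to encode, per block category."""
    block, bg_block = block.astype(int), bg_block.astype(int)
    prev_block, prev_bg = prev_block.astype(int), prev_bg.astype(int)
    if category == "background":
        # BRP: the modeled background serves as a long-term reference.
        return block - bg_block
    if category == "hybrid":
        # BDP: predict in the background-difference domain.
        return (block - bg_block) - (prev_block - prev_bg)
    # Foreground blocks fall back to ordinary inter prediction.
    return block - prev_block
```

    A real encoder would then transform and entropy-code whichever residual is cheapest; the sketch only shows the mode selection.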

  9. Feasibility of a Video-Based Advance Care Planning Website to Facilitate Group Visits among Diverse Adults from a Safety-Net Health System.

    PubMed

    Zapata, Carly; Lum, Hillary D; Wistar, Emily; Horton, Claire; Sudore, Rebecca L

    2018-02-20

    Primary care providers in safety-net settings often do not have time to discuss advance care planning (ACP). Group visits (GVs) may be an efficient means to provide ACP education. To assess the feasibility and impact of a video-based website to facilitate GVs to engage diverse adults in ACP, we conducted a feasibility pilot among patients who were ≥55 years of age from two primary care clinics in a Northern California safety-net setting. Participants attended two 90-minute GVs and viewed the five steps of the movie version of the PREPARE website ( www.prepareforyourcare.org ) concerning surrogates, values, and discussing wishes in video format. Two clinician facilitators were available to encourage participation. We assessed pre-to-post ACP knowledge, whether participants designated a surrogate or completed an advance directive (AD), and the acceptability of the GVs and PREPARE materials. We conducted two GVs with 22 participants. Mean age was 64 years (±7), 55% were women, 73% were nonwhite, and 55% had limited literacy. Knowledge improved about surrogate designation (46% correct pre vs. 85% post, p = 0.01) and discussing decisions with others (59% vs. 90%, p = 0.01). Surrogate designation increased (48% vs. 85%, p = 0.01) and there was a trend toward AD completion (9% vs. 24%, p = 0.21). Participants rated the GVs and PREPARE materials a mean of 8 (±3.1) on a 10-point acceptability scale. Using the PREPARE movie to facilitate ACP GVs for diverse adults in safety-net primary care settings is feasible and shows potential for increasing ACP engagement.

  10. Localizing Target Structures in Ultrasound Video

    PubMed Central

    Kwitt, R.; Vasconcelos, N.; Razzaque, S.; Aylward, S.

    2013-01-01

    The problem of localizing specific anatomic structures using ultrasound (US) video is considered. This involves automatically determining when an US probe is acquiring images of a previously defined object of interest, during the course of an US examination. Localization using US is motivated by the increased availability of portable, low-cost US probes, which inspire applications where inexperienced personnel and even first-time users acquire US data that is then sent to experts for further assessment. This process is of particular interest for routine examinations in underserved populations as well as for patient triage after natural disasters and large-scale accidents, where experts may be in short supply. The proposed localization approach is motivated by research in the area of dynamic texture analysis and leverages several recent advances in the field of activity recognition. For evaluation, we introduce an annotated and publicly available database of US video, acquired on three phantoms. Several experiments reveal the challenges of applying video analysis approaches to US images and demonstrate that good localization performance is possible with the proposed solution. PMID:23746488

  11. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.

  12. Effects of Video Streaming Technology on Public Speaking Students' Communication Apprehension and Competence

    ERIC Educational Resources Information Center

    Dupagne, Michel; Stacks, Don W.; Giroux, Valerie Manno

    2007-01-01

    This study examines whether video streaming can reduce trait and state communication apprehension, as well as improve communication competence, in public speaking classes. Video streaming technology has been touted as the next generation of video feedback for public speaking students because it is not limited by time or space and allows Internet…

  13. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks that are used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are handled by the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of CYCLONE V 5CEFA7 FPGA resources on average.

  14. A case study : benefits associated with the sharing of ATMS-related video data in San Antonio, TX

    DOT National Transportation Integrated Search

    1998-08-11

    This paper summarizes various findings relating to the integration of Advanced Traffic Management System (ATMS) components of video data in San Antonio, TX. Specifically, the paper examines the perceived benefits derived from the sharing of video dat...

  15. Portrayal of Smokeless Tobacco in YouTube Videos

    PubMed Central

    Augustson, Erik M.; Backinger, Cathy L.

    2012-01-01

    Objectives: Videos of smokeless tobacco (ST) on YouTube are abundant and easily accessible, yet no studies have examined the content of ST videos. This study assesses the overall portrayal, genre, and messages of ST YouTube videos. Methods: In August 2010, researchers identified the top 20 search results on YouTube by “relevance” and “view count” for the following search terms: “ST,” “chewing tobacco,” “snus,” and “Skoal.” After eliminating videos that were not about ST (n = 26), non-English (n = 14), or duplicate (n = 42), a final sample of 78 unique videos was coded for overall portrayal, genre, and various content measures. Results: Among the 78 unique videos, 15.4% were anti-ST, while 74.4% were pro-ST. Researchers were unable to determine the portrayal of ST in the remaining 10.3% of videos because they involved excessive or “sensationalized” use of the ST, which could be interpreted either positively or negatively, depending on the viewer. The most common ST genre was positive video diaries (or “vlogs”), which made up almost one third of the videos (29.5%), followed by promotional advertisements (20.5%) and anti-ST public service announcements (12.8%). While YouTube is intended for user-generated content, 23.1% of the videos were created by professional organizations. Conclusions: These results demonstrate that ST videos on YouTube are overwhelmingly pro-ST. More research is needed to determine who is viewing these ST YouTube videos and how they may affect people’s knowledge, attitudes, and behaviors regarding ST use. PMID:22080585

  16. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  17. Video document

    NASA Astrophysics Data System (ADS)

    Davies, Bob; Lienhart, Rainer W.; Yeo, Boon-Lock

    1999-08-01

    The metaphor of film and TV permeates the design of software to support video on the PC. Simply transplanting the non-interactive, sequential experience of film to the PC fails to exploit the virtues of the new context. Video on the PC should be interactive and non-sequential. This paper experiments with a variety of tools for using video on the PC that exploit the new context of the PC. Some features are more successful than others. Applications that use these tools are explored, including primarily the home video archive but also streaming video servers on the Internet. The ability to browse, edit, abstract and index large volumes of video content such as home video and corporate video is a problem without an appropriate solution in today's market. The tools currently available are complex, unfriendly video editors, requiring hours of work to prepare a short home video, far more work than a typical home user can be expected to provide. Our proposed solution treats video like a text document, providing functionality similar to a text editor. Users can browse, interact, edit and compose one or more video sequences with the same ease and convenience as handling text documents. With this level of text-like composition, we call what is normally a sequential medium a 'video document'. An important component of the proposed solution is shot detection, the ability to detect when a shot started or stopped. When combined with a spreadsheet of key frames, the video becomes a grid of pictures that can be manipulated and viewed in the same way that a spreadsheet can be edited. Multiple video documents may be viewed, joined, manipulated, and seamlessly played back. Abstracts of unedited video content can be produced automatically to create novel video content for export to other venues. Edited and raw video content can be published to the net or burned to a CD-ROM with a self-installing viewer for Windows 98 and Windows NT 4.0.

  18. The presentation of seizures and epilepsy in YouTube videos.

    PubMed

    Wong, Victoria S S; Stevenson, Matthew; Selwa, Linda

    2013-04-01

    We evaluated videos on the social media website, YouTube, containing references to seizures and epilepsy. Of 100 videos, 28% contained an ictal event, and 25% featured a person with epilepsy recounting his or her personal experience. Videos most commonly fell into categories of Personal Experience/Anecdotal (44%) and Informative/Educational (38%). Fifty-one percent of videos were judged as accurate, and 9% were inaccurate; accuracy was not an applicable attribute in the remainder of the videos. Eighty-five percent of videos were sympathetic towards those with seizures or epilepsy, 9% were neutral, and only 6% were derogatory. Ninety-eight percent of videos were thought to be easily understood by a layperson. The user-generated content on YouTube appears to be more sympathetic and accurate compared to other forms of mass media. We are optimistic that with a shifting ratio towards sympathetic content about epilepsy, the amount of stigma towards epilepsy and seizures will continue to lessen. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Content-based video retrieval by example video clip

    NASA Astrophysics Data System (ADS)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
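
    As a rough illustration of signature-based matching, the sketch below approximates each frame's DC coefficients by 8×8 block means and compares two equal-length clips by the average L1 distance between their signatures. The block size, the distance measure, and the omission of the motion ('M') component are simplifying assumptions, not the authors' exact method.

```python
import numpy as np

def frame_signature(frame, block=8):
    """Proxy for DCT DC coefficients: the mean of each 8x8 block."""
    h, w = frame.shape
    h -= h % block
    w -= w % block
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

def clip_distance(clip_a, clip_b):
    """Average L1 distance between aligned frame signatures of equal-length clips."""
    sigs_a = np.stack([frame_signature(f) for f in clip_a])
    sigs_b = np.stack([frame_signature(f) for f in clip_b])
    return float(np.mean(np.abs(sigs_a - sigs_b)))
```

    A query clip would then be ranked against every archived clip by `clip_distance`, smallest distances first.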

  20. 2D Automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  1. 2D automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  2. Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild

    DTIC Science & Technology

    2014-08-23

    …the videos and produce probabilistic detections of grammatical subjects, verbs, and objects. In our data-set there are 45 candidate entities for the grammatical subject (such as animal, baby, cat, chef, and person) and 241 for the grammatical object (such as flute, motorbike, shrimp, person, and tv). There are 218 candidate activities for the grammatical verb, including climb, cut, play, ride, and walk. Entity Related Features: From each video two…

  3. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with `actual' truth measurements of the entire image area that are not subject to measurement error thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  4. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  5. An overview of recent end-to-end wireless medical video telemedicine systems using 3G.

    PubMed

    Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E

    2010-01-01

    Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated in daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of recent advances in the field, while also highlighting future trends in the design of telemedicine systems that are diagnostically driven.

  6. Pro-Anorexia and Anti-Pro-Anorexia Videos on YouTube: Sentiment Analysis of User Responses.

    PubMed

    Oksanen, Atte; Garcia, David; Sirola, Anu; Näsi, Matti; Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka

    2015-11-12

    Pro-anorexia communities exist online and encourage harmful weight loss and weight control practices, often through emotional content that enforces social ties within these communities. User-generated responses to videos that directly oppose pro-anorexia communities have not yet been researched in depth. The aim was to study emotional reactions to pro-anorexia and anti-pro-anorexia online content on YouTube using sentiment analysis. Using the 50 most popular YouTube pro-anorexia and anti-pro-anorexia user channels as a starting point, we gathered data on users, their videos, and their commentators. A total of 395 anorexia videos and 12,161 comments were analyzed using positive and negative sentiments and ratings submitted by the viewers of the videos. The emotional information was automatically extracted with an automatic sentiment detection tool whose reliability was tested with human coders. Ordinary least squares regression models were used to estimate the strength of sentiments. The models controlled for the number of video views and comments, number of months the video had been on YouTube, duration of the video, uploader's activity as a video commentator, and uploader's physical location by country. The 395 videos had more than 6 million views and comments by almost 8000 users. Anti-pro-anorexia video comments expressed more positive sentiments on a scale of 1 to 5 (adjusted prediction [AP] 2.15, 95% CI 2.11-2.19) than did those of pro-anorexia videos (AP 2.02, 95% CI 1.98-2.06). Anti-pro-anorexia videos also received more likes (AP 181.02, 95% CI 155.19-206.85) than pro-anorexia videos (AP 31.22, 95% CI 31.22-37.81). Negative sentiments and video dislikes were equally distributed in responses to both pro-anorexia and anti-pro-anorexia videos. 
Despite pro-anorexia content being widespread on YouTube, videos promoting help for anorexia and opposing the pro-anorexia community were more popular, gaining more positive feedback and comments than pro-anorexia videos.
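
    The adjusted predictions reported above come from ordinary least squares models with covariate controls. The sketch below reproduces the idea on synthetic data (the coefficients, controls, and sample size are illustrative assumptions, not the study's data): regress sentiment on a pro/anti indicator while controlling for, e.g., view count and time on the site.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
anti = rng.integers(0, 2, n)           # 1 = anti-pro-anorexia video
log_views = rng.normal(10.0, 2.0, n)   # control: log of view count
months = rng.uniform(1.0, 48.0, n)     # control: months the video has been up
# Synthetic sentiment on a 1-5 scale; anti videos score ~0.13 higher on average.
sentiment = 2.02 + 0.13 * anti + 0.01 * log_views + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), anti, log_views, months])
beta, *_ = np.linalg.lstsq(X, sentiment, rcond=None)
# beta[1] estimates the adjusted pro/anti difference, net of the controls.
```

    Evaluating the fitted model at fixed covariate values, separately for anti = 0 and anti = 1, yields adjusted predictions of the kind quoted in the abstract.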

  7. Video Denoising via Dynamic Video Layering

    NASA Astrophysics Data System (ADS)

    Guo, Han; Vaswani, Namrata

    2018-07-01

    Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts: the "low-rank layer", the "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.

  8. VideoANT: Extending Online Video Annotation beyond Content Delivery

    ERIC Educational Resources Information Center

    Hosack, Bradford

    2010-01-01

    This paper expands the boundaries of video annotation in education by outlining the need for extended interaction in online video use, identifying the challenges faced by existing video annotation tools, and introducing VideoANT, a tool designed to create text-based annotations integrated within the timeline of a video hosted online. Several…

  9. Making Sure What You See Is What You Get: Digital Video Technology and the Preparation of Teachers of Elementary Science

    ERIC Educational Resources Information Center

    Bueno de Mesquita, Paul; Dean, Ross F.; Young, Betty J.

    2010-01-01

    Advances in digital video technology create opportunities for more detailed qualitative analyses of actual teaching practice in science and other subject areas. User-friendly digital cameras and highly developed, flexible video-analysis software programs have made the tasks of video capture, editing, transcription, and subsequent data analysis…

  10. Video and Computer Technologies for Extended-Campus Programming.

    ERIC Educational Resources Information Center

    Sagan, Edgar L.; And Others

    This paper discusses video and computer technologies for extended-campus programming (courses and programs at off-campus sites). The first section provides an overview of the distance education program at the University of Kentucky (UK), and highlights the improved access to graduate and professional programs, advances in technology, funding,…

  11. Comparison of Video Head Impulse Test and Caloric Reflex Test in advanced unilateral definite Menière's disease.

    PubMed

    Rubin, F; Simon, F; Verillaud, B; Herman, P; Kania, R; Hautefort, C

    2018-06-01

    There have been very few studies of the Video Head Impulse Test (VHIT) in patients with Menière's Disease (MD). Some reported 100% normal VHIT results, others not. These discrepancies may be due to differences in severity. The present study compared VHIT and caloric reflex test results in advanced unilateral definite MD. A prospective study included 37 consecutive patients, with a mean age of 56±12 years. Mean hearing loss was 59±18dB HL; 12 patients were subject to Tumarkin's otolithic crises. Abnormal caloric reflex was defined as ≥20% deficit, and abnormal VHIT as presence of saccades or <0.64 gain in vertical semicircular canals and <0.78 in horizontal canals. All patients had normal VHIT results, and 3 had normal caloric reflex; mean caloric reflex deficit was 45%. The present study is the only one to use the August 2015 updated definition of MD. The results showed that, outside of episodes of crisis, VHIT was normal during advanced unilateral definite MD, in contrast to abnormal caloric reflex. This feature could help distinguish MD from other inner ear diseases, and it would be interesting to try to confirm this hypothesis by studying MD patients. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
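
    The study's abnormality criteria reduce to two simple decision rules, sketched below (function names are illustrative; the thresholds are those stated in the abstract).

```python
def vhit_abnormal(gain, canal, saccades_present):
    """Abnormal VHIT: refixation saccades present, or gain below 0.64 for
    vertical semicircular canals / 0.78 for horizontal canals."""
    limit = 0.78 if canal == "horizontal" else 0.64
    return saccades_present or gain < limit

def caloric_abnormal(deficit_percent):
    """Abnormal caloric reflex: canal paresis of 20% or more."""
    return deficit_percent >= 20
```

    Under these rules, the study's typical patient (normal gains, no saccades, 45% caloric deficit) is VHIT-normal but caloric-abnormal, which is exactly the dissociation reported.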

  12. Development of a Power Electronics Controller for the Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Leland, Douglas K.; Priest, Joel F.; Keiter, Douglas E.; Schreiber, Jeffrey G.

    2008-01-01

    Under a U.S. Department of Energy program for radioisotope power systems, Lockheed Martin is developing an Engineering Unit of the Advanced Stirling Radioisotope Generator (ASRG). This is an advanced version of the previously reported SRG110 generator. The ASRG uses Advanced Stirling Convertors (ASCs) developed by Sunpower Incorporated under a NASA Research Announcement contract. The ASRG makes use of a Stirling controller based on power electronics that eliminates the tuning capacitors. The power electronics controller synchronizes dual-opposed convertors and maintains a fixed frequency operating point. The controller is single-fault tolerant and uses high-frequency pulse width modulation to create the sinusoidal currents that are nearly in phase with the piston velocity, eliminating the need for large series tuning capacitors. Sunpower supports this effort through an extension of their controller development intended for other applications. Glenn Research Center (GRC) supports this effort through system dynamic modeling, analysis and test support. The ASRG design arrived at a new baseline based on a system-level trade study and extensive feedback from mission planners on the necessity of single-fault tolerance. This paper presents the baseline design with an emphasis on the power electronics controller detailed design concept that will meet space mission requirements including single fault tolerance.

  13. Practicality in Virtuality: Finding Student Meaning in Video Game Education

    NASA Astrophysics Data System (ADS)

    Barko, Timothy; Sadler, Troy D.

    2013-04-01

    This paper looks at the conceptual differences between video game learning and traditional classroom and laboratory learning. It explores the notion of virtual experience by comparing a commonly used high school laboratory protocol on DNA extraction with a similar experience provided by a biotechnology themed video game. When considered conceptually, the notion of virtual experience is not limited to those experiences generated by computer aided technology, as with a video game or computer simulation. The notion of virtuality can apply to many real world experiences as well. It is proposed that the medium of the learning experience, be it video game or classroom, is not an important distinction to consider; instead, we should seek to determine what kinds of meaningful experiences apply for both classrooms and video games.

  14. Holo-Chidi video concentrator card

    NASA Astrophysics Data System (ADS)

    Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.

    2001-12-01

    The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo- Chidi is designed at the MIT Media Laboratory for real-time computation of computer generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram computing Processor cards, converting the digital data to analog form to feed the acousto-optic-modulators of the Media lab's Mark-II holographic display system. The Video Concentrator card is made of: a High-Speed I/O (HSIO) interface whence data is transferred from the hologram computing Processor cards, a set of FIFOs and video RAM used as buffer for data for the hololines being displayed, a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port, a co-processor which controls display data formatting, and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor Cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty two 36Megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low- speed USB port. 
Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control

  15. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    In this paper we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and the image content they represent after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between inter-frames along the temporal direction using motion compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
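    The front end of the scheme (GOF splitting followed by dealing coefficient trees into several bit-streams) can be illustrated with a toy sketch; here generic items stand in for the spatial orientation trees, and the wavelet transform, SPIHT coding, and added redundancy are all omitted:

```python
def make_descriptions(items, gof_size=8, n_desc=3):
    # Split the sequence into equal-sized groups (GOFs), then deal each
    # group's items round-robin into n_desc descriptions. The items are
    # placeholders for spatial orientation trees of wavelet coefficients.
    gofs = [items[i:i + gof_size] for i in range(0, len(items), gof_size)]
    descriptions = [[] for _ in range(n_desc)]
    for gof in gofs:
        for k, item in enumerate(gof):
            descriptions[k % n_desc].append(item)
    return gofs, descriptions

gofs, descs = make_descriptions(list(range(16)), gof_size=8, n_desc=2)
```

Losing one description still leaves every other tree of each GOF intact, which is what makes multiple-description transmission robust to a failed path.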

  16. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  17. Laser velocimeter measurements of the flowfield generated by an advanced counterrotating propeller

    NASA Technical Reports Server (NTRS)

    Podboy, Gary G.; Krupar, Martin J.

    1989-01-01

    Results are presented of an investigation to measure the flowfield generated by an advanced counterrotating pusher propeller model similar to the full-scale Unducted Fan demonstrator engine. A laser Doppler velocimeter was used to measure the velocity field in several planes normal to the centerline of the model at axial stations upstream and downstream of each rotor. During this investigation, blades of the F4/A4 type were installed on the model which was operating in a freestream Mach 0.72 regime, with the advance ratio of each rotor set at 2.80. The measured data indicate only a slight influence of the potential field of each front rotor blade on the flowfield upstream of the rotor. The data measured downstream of the front rotor characterize the tip vortices, vortex sheets and potential field nonuniformities generated by the front rotor. The unsteadiness of the flow in the rotating frame of reference of the aft rotor is also illustrated.

  18. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    for creating initial alerts - we refer to this as software-level detection; the next-level building block, immersive 3D visual assessment for situational awareness and for managing the reaction process - we refer to this as automated intelligent situational awareness; and a third building block, wide-area command and control capabilities to allow control from a remote location - we refer to this as the management and process control building block, integrating together the lower-level building elements. In addition, this paper describes three live installations of complete, total systems that incorporate visible and thermal cameras as well as advanced video analytics. Discussion of both system elements and design is extensive.

  19. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience, so new quality metrics have been widely studied recently. The objective of this paper is therefore to investigate quality metrics for wireless adaptive video streaming. A wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established, and on this platform a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video itself. To evaluate these QoE models, three performance metrics (SROCC, PLCC and RMSE) comparing subjective and predicted MOS (Mean Opinion Score) are calculated; from them, the monotonicity, linearity and accuracy of the quality metrics can be observed.
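    The three performance metrics named above can be computed directly from paired subjective and predicted MOS values; a minimal pure-Python sketch (the five MOS pairs are invented for illustration):

```python
import math

def pearson(x, y):
    # PLCC: linear correlation between predicted and subjective MOS
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    # 1-based ranks, averaging over ties
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srocc(x, y):
    # SROCC: Pearson correlation of the rank vectors (monotonicity)
    return pearson(ranks(x), ranks(y))

def rmse(x, y):
    # RMSE: prediction accuracy
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

# Hypothetical subjective vs. predicted MOS for five test sequences
subj = [4.2, 3.1, 2.5, 4.8, 1.9]
pred = [4.0, 3.3, 2.2, 4.6, 2.1]
print(srocc(subj, pred), pearson(subj, pred), rmse(subj, pred))
```

SROCC checks monotonicity via rank correlation, PLCC checks linearity, and RMSE checks accuracy, which is why the three are reported together.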

  20. Video Dubbing Projects in the Foreign Language Curriculum

    ERIC Educational Resources Information Center

    Burston, Jack

    2005-01-01

    The dubbing of muted video clips offers an excellent opportunity to develop the skills of foreign language learners at all linguistic levels. In addition to its motivational value, soundtrack dubbing provides a rich source of activities in all language skill areas: listening, reading, writing, speaking. With advanced students, it also lends itself…

  1. Impact of Video Feedback on Teachers' Eye-Contact Mannerisms in Microteaching.

    ERIC Educational Resources Information Center

    Karasar, Niyazi

    To test the impact of video feedback on teachers' eye-contact mannerisms in microteaching in inservice vocational teacher education, the study utilized video recordings from the data bank generated by previous studies conducted at the Ohio State University's Center for Vocational and Technical Education. The tapes were assigned through a…

  2. H.264/AVC Video Compression on Smartphones

    NASA Astrophysics Data System (ADS)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already approaching the compression efficiency limit of H.264/AVC.

  3. Advancement of wave generation and signal transmission in wire waveguides for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Kropf, M.; Pedrick, M.; Wang, X.; Tittmann, B. R.

    2005-05-01

    Following recent advances in remote in situ monitoring of industrial equipment using long (~10 m) wire waveguides, novel applications of existing wave-generation techniques and new acoustic modeling software have been used to advance waveguide technology. The amount of attainable information from an acoustic signal in such a system is limited by transmission through the waveguide along with the frequency content of the generated waves. Magnetostrictive and electromagnetic generation techniques were investigated in order to maximize acoustic transmission along the waveguide and broaden the range of usable frequencies. Commercial EMAT, magnetostrictive, and piezoelectric disc transducers (the latter through the innovative use of an acoustic horn) were utilized to generate waves in the wire waveguide. Insertion loss, frequency bandwidth and frequency range were examined for each technique. Electromagnetic techniques are shown to allow for higher-frequency wave generation. This increases accessibility of dispersion curves, providing further versatility in the selection of guided wave modes and thus increasing the sensitivity to physical characteristics of the specimen. Both electromagnetic and magnetostrictive transducers require the use of a ferromagnetic waveguide, typically coupled to a steel wire when considering long transmission lines (>2 m). The interface between these wires introduces an acoustic transmission loss. Coupling designs were examined with acoustic finite element software (Coupled-Acoustic Piezoelectric Analysis). Simulations along with experimental results aided in the design of a novel joint which minimizes transmission loss. These advances result in increased capability for remote sensing using wire waveguides.

  4. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
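    The calibrate-then-read step can be sketched for a linear analog dial; the sweep angles and scale below are hypothetical, and extracting the needle angle from the digitized image is assumed to have happened already:

```python
def calibrate(angle_min, angle_max, value_min, value_max):
    # Return a function mapping a needle angle (degrees) to a meter
    # reading, assuming a linear analog dial. The endpoints come from a
    # one-time calibration of the meter's region in the video image.
    span_a = angle_max - angle_min
    span_v = value_max - value_min
    def read(angle):
        # clamp to the calibrated sweep, then interpolate linearly
        angle = max(angle_min, min(angle_max, angle))
        return value_min + (angle - angle_min) / span_a * span_v
    return read

# Hypothetical dial: the needle sweeps 45..315 degrees over a 0..100 kPa scale
gauge = calibrate(45.0, 315.0, 0.0, 100.0)
print(gauge(180.0))  # mid-sweep
```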

  5. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

    With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies also points to the emergence of real-time video holographic systems, enabling animated 3-D content to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but the heavy computation involved still imposes difficulty in practical applications. In this paper, a summary of a number of successful accomplishments made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes extending the depth range of the object scene, which is important for the display of large-scale objects.

  6. Oxygen Generation from Carbon Dioxide for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Bishop, Sean; Duncan, Keith; Hagelin-Weaver, Helena; Neal, Luke; Sanchez, Jose; Paul, Heather L.; Wachsman, Eric

    2007-01-01

    The partial electrochemical reduction of carbon dioxide (CO2) using ceramic oxygen generators (COGs) is well known and widely studied. However, complete reduction of metabolically produced CO2 (into carbon and oxygen) has the potential of reducing oxygen storage weight for life support if the oxygen can be recovered. Recently, the University of Florida developed novel ceramic oxygen generators employing a bilayer electrolyte of gadolinia-doped ceria and erbia-stabilized bismuth oxide (ESB) for NASA's future exploration of Mars. The results showed that oxygen could be reliably produced from CO2 at temperatures as low as 400 °C. The strategy discussed here for advanced life support systems employs a catalytic layer combined with a COG cell so that CO2 is reduced all the way to solid carbon and oxygen without carbon buildup on the COG cell and subsequent deactivation.

  7. English language YouTube videos as a source of lead poisoning-related information: a cross-sectional study.

    PubMed

    Basch, Corey H; Jackson, Ashley M; Yin, Jingjing; Hammond, Rodney N; Adhikari, Atin; Fung, Isaac Chun-Hai

    2017-07-01

    Exposure to lead is detrimental to children's development. YouTube is a form of social media through which people may learn about lead poisoning. The aim of this cross-sectional study was to analyze the variation in lead poisoning-related YouTube content between different video sources. The 100 most viewed lead poisoning-related videos were manually coded; of these, 50 were consumer-generated, 19 were created by health care professionals, and 31 were news. The 100 videos had a total of more than 8.9 million views, with news videos accounting for 63% of those views. The odds of mentioning what lead poisoning is, how to remove lead, and specifically mentioning the danger at ages 1-5 because of rapid growth were, among videos created by health care professionals, 7.28 times (odds ratio, OR = 7.28, 95% CI, 2.09, 25.37, p = 0.002), 6.83 times (OR = 6.83, 95% CI, 2.05, 22.75, p = 0.002) and 9.14 times (OR = 9.14, CI, 2.05, 40.70, p = 0.004) those of consumer-generated videos, respectively. In this study, professional videos had more accurate information regarding lead, but they were less likely to be viewed than consumer-generated videos and news videos. If professional videos about lead poisoning can attract more viewers, more people would be better informed, possibly influencing policy agendas and thereby helping communities affected by lead exposure.
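    The reported odds ratios follow the standard 2x2-table computation with a Wald confidence interval; in the sketch below the cell counts are hypothetical (chosen only to roughly reproduce the first reported OR, not taken from the paper):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Odds ratio and Wald 95% CI from a 2x2 table:
    # a = professional videos mentioning the topic, b = not mentioning,
    # c = consumer videos mentioning,               d = not mentioning.
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

# Hypothetical counts (19 professional, 50 consumer videos in total)
orr, lo, hi = odds_ratio_ci(15, 4, 17, 33)
print(orr, lo, hi)
```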

  8. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    NASA Astrophysics Data System (ADS)

    Li, Houqiang; Wang, Yi; Chen, Chang Wen

    2007-12-01

    With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are enthusiastic about watching videos on mobile devices. However, the limited display size of mobile devices imposes significant barriers for users browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework comprises two major parts: video content generation and a video adaptation system. During video compression, the attention information in video sequences is detected using an attention model and embedded into bitstreams with a proposed supplemental enhancement information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of the overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream for the attention areas in frames. This new low-resolution bitstream, containing mostly attention information, rather than the high-resolution one, is sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.

  9. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing has primarily focused on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from the audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
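    The visual-shot segmentation step (detecting abrupt changes between consecutive frames) can be sketched with a simple histogram-difference detector; the 4-bin histograms and the threshold below are illustrative only:

```python
def shot_boundaries(hists, threshold=0.5):
    # Detect visual shot cuts as abrupt changes between consecutive
    # frame color histograms (L1 distance on normalized histograms).
    # Real systems add adaptive thresholds and flash/fade handling.
    cuts = []
    for i in range(1, len(hists)):
        dist = sum(abs(p - q) for p, q in zip(hists[i - 1], hists[i]))
        if dist > threshold:
            cuts.append(i)  # a new shot starts at frame i
    return cuts

# Toy 4-bin histograms: an abrupt content change occurs at frame 3
frames = [
    [0.70, 0.20, 0.10, 0.00],
    [0.68, 0.22, 0.10, 0.00],
    [0.65, 0.23, 0.12, 0.00],
    [0.10, 0.10, 0.30, 0.50],
    [0.12, 0.10, 0.28, 0.50],
]
print(shot_boundaries(frames))
```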

  10. Advanced Stirling Radioisotope Generator Life Certification Plan

    NASA Technical Reports Server (NTRS)

    Rusick, Jeffrey J.; Zampino, Edward J.

    2013-01-01

    An Advanced Stirling Radioisotope Generator (ASRG) power supply is being developed by the Department of Energy (DOE) in partnership with NASA for potential future deep space science missions. Unlike previous radioisotope power supplies for space exploration, such as the passive MMRTG used recently on the Mars Curiosity rover, the ASRG is an active dynamic power supply with moving Stirling engine mechanical components. Due to the long life requirement of 17 years and the dynamic nature of the Stirling engine, the ASRG project faced some unique challenges trying to establish full confidence that the power supply will function reliably over the mission life. These unique challenges resulted in the development of an overall life certification plan that emphasizes long-term Stirling engine test and inspection when analysis is not practical. The ASRG life certification plan developed is described.

  11. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most attractive types of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that 3D can generate are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of the visual fatigue generated by 3D video watching, with the help of eye-tracking. On one hand, a questionnaire covering the most frequent symptoms linked with 3D is used in order to measure their variation over time. On the other hand, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking have been explored using data provided by the eye-tracker. The statistical analysis showed an important link of blinking duration and number of saccades with visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  12. Investigation of advancing front method for generating unstructured grid

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1992-01-01

    The advancing front technique is used to generate unstructured grids about simple aerodynamic geometries. Unstructured grids are generated using the VGRID2D and VGRID3D software. Specific problems considered are a NACA 0012 airfoil, a biplane consisting of two NACA 0012 airfoils, a four-element airfoil in its landing configuration, and an ONERA M6 wing. Inviscid time-dependent solutions are computed on these geometries using USM3D, and the results are compared with standard test results obtained by other investigators. A grid convergence study is conducted for the NACA 0012 airfoil and compared with a structured grid. The structured grid is generated using the GRIDGEN software and inviscid solutions are computed using the CFL3D flow solver. The results obtained with the unstructured grid for the NACA 0012 airfoil showed an asymmetric distribution of flow quantities, and a fine grid distribution was required to remove this asymmetry. The structured grid, on the other hand, predicted a very symmetric distribution; however, when the total numbers of points required to obtain the same results were compared, the structured grid required more grid points.

  13. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.
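    Speed-time data read off such a video can be turned into the distance-time motion map by numerical integration; a minimal sketch using the trapezoid rule (the sample series is invented):

```python
def distance_from_speed(times_s, speeds_mps):
    # Integrate a speed-time series (e.g., read frame by frame off a
    # dashboard video) into total distance via the trapezoid rule.
    d = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        d += 0.5 * (speeds_mps[i] + speeds_mps[i - 1]) * dt
    return d

# Constant 10 m/s for 5 s should give 50 m
print(distance_from_speed([0, 1, 2, 3, 4, 5], [10, 10, 10, 10, 10, 10]))
```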

  14. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks

    PubMed Central

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113

  15. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    PubMed

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
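    The member device's buffer-driven rate control described above ("if the data are sufficiently buffered, stop; if not, request more") amounts to a hysteresis controller; the watermark values here are illustrative assumptions, not taken from the paper:

```python
def control_download(buffered_seconds, high_watermark=30.0,
                     low_watermark=10.0, downloading=True):
    # Buffer-driven download control for a relay member device: stop
    # when enough video is buffered, resume before the buffer drains.
    if downloading and buffered_seconds >= high_watermark:
        return False    # enough data buffered: pause the download
    if not downloading and buffered_seconds <= low_watermark:
        return True     # buffer running low: request more video data
    return downloading  # otherwise keep the current state (hysteresis)
```

The gap between the two watermarks prevents the device from rapidly toggling between downloading and pausing around a single threshold.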

  16. An objective method for a video quality evaluation in a 3DTV service

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2015-09-01

    The following article describes a proposed objective method for 3DTV video quality evaluation: the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of its performance under simulated environmental conditions, are discussed. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.

  17. Characterization of the Advanced Stirling Radioisotope Generator EU2

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Oriti, Salvatore M.; Schifer, Nicholas A.

    2015-01-01

    Significant progress was made developing the Advanced Stirling Radioisotope Generator (ASRG), a 140-watt radioisotope power system. While the ASRG flight development project has ended, the hardware that was designed and built under the project is continuing to be tested to support future Stirling-based power system development. NASA GRC recently completed the assembly of the ASRG Engineering Unit 2 (EU2). The ASRG EU2 consists of the first pair of Sunpower's ASC-E3 Stirling convertors mounted in an aluminum housing, and Lockheed Martin's Engineering Development Unit (EDU) 4 controller (a fourth generation controller). The ASC-E3 convertors and Generator Housing Assembly (GHA) closely match the intended ASRG Qualification Unit flight design. A series of tests were conducted to characterize the EU2, its controller, and the convertors in the flight-like GHA. The GHA contained an argon cover gas for these tests. The tests included: measurement of convertor, controller, and generator performance and efficiency, quantification of control authority of the controller, disturbance force measurement with varying piston phase and piston amplitude, and measurement of the effect of spacecraft DC bus voltage on EU2 performance. The results of these tests are discussed and summarized, providing a basic understanding of EU2 characteristics and the performance and capability of the EDU 4 controller.

  18. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set; (2) a block containing a compressed intra coded frame; (3) a block containing the slice header of a P slice, all the macroblock headers within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within that slice; (4) a block containing all the AC coefficients; and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
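    The scanning step of SEH264Algorithm2 can be sketched as follows; XOR with a keystream stands in for whatever cipher is actually used, the byte payload is invented, and a deployable version would also need to avoid emitting bytes that emulate start codes:

```python
def encrypt_after_start_codes(stream: bytes, n: int, keystream) -> bytes:
    # Locate each 0x000001 start code and encrypt the next n bytes.
    # keystream(j) returns the key byte for absolute position j.
    out = bytearray(stream)
    i = 0
    while i <= len(out) - 3:
        if out[i] == 0 and out[i + 1] == 0 and out[i + 2] == 1:
            for j in range(i + 3, min(i + 3 + n, len(out))):
                out[j] ^= keystream(j)
            i += 3 + n  # skip past the encrypted region
        else:
            i += 1
    return bytes(out)

# Invented payload with two start codes; a constant key byte for clarity
data = bytes([0x00, 0x00, 0x01, 0x65, 0x88, 0x84,
              0x00, 0x00, 0x01, 0x41, 0x9A])
enc = encrypt_after_start_codes(data, 2, lambda j: 0xFF)
```

Because XOR is an involution, running the same function over the output with the same keystream recovers the original stream, and the start codes themselves stay in the clear so the stream remains parseable.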

  19. Pro-Anorexia and Anti-Pro-Anorexia Videos on YouTube: Sentiment Analysis of User Responses

    PubMed Central

    Garcia, David; Sirola, Anu; Näsi, Matti; Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka

    2015-01-01

    Background Pro-anorexia communities exist online and encourage harmful weight loss and weight control practices, often through emotional content that enforces social ties within these communities. User-generated responses to videos that directly oppose pro-anorexia communities have not yet been researched in depth. Objective The aim was to study emotional reactions to pro-anorexia and anti-pro-anorexia online content on YouTube using sentiment analysis. Methods Using the 50 most popular YouTube pro-anorexia and anti-pro-anorexia user channels as a starting point, we gathered data on users, their videos, and their commentators. A total of 395 anorexia videos and 12,161 comments were analyzed using positive and negative sentiments and ratings submitted by the viewers of the videos. The emotional information was automatically extracted with an automatic sentiment detection tool whose reliability was tested with human coders. Ordinary least squares regression models were used to estimate the strength of sentiments. The models controlled for the number of video views and comments, number of months the video had been on YouTube, duration of the video, uploader’s activity as a video commentator, and uploader’s physical location by country. Results The 395 videos had more than 6 million views and comments by almost 8000 users. Anti-pro-anorexia video comments expressed more positive sentiments on a scale of 1 to 5 (adjusted prediction [AP] 2.15, 95% CI 2.11-2.19) than did those of pro-anorexia videos (AP 2.02, 95% CI 1.98-2.06). Anti-pro-anorexia videos also received more likes (AP 181.02, 95% CI 155.19-206.85) than pro-anorexia videos (AP 31.22, 95% CI 31.22-37.81). Negative sentiments and video dislikes were equally distributed in responses to both pro-anorexia and anti-pro-anorexia videos. Conclusions Despite pro-anorexia content being widespread on YouTube, videos promoting help for anorexia and opposing the pro-anorexia community were more popular, gaining more

  20. Leakage Currents and Gas Generation in Advanced Wet Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2015-01-01

    Currently, military-grade, established-reliability wet tantalum capacitors are among the most reliable parts used for space applications. This has been achieved over the years by extensive testing and improvements in design and materials. However, rapid insertion of new types of advanced, high-volumetric-efficiency capacitors into space systems without proper testing and analysis of degradation mechanisms might increase the risk of failures. A specific feature of leakage currents in wet electrolytic capacitors is that the conduction process is associated with electrolysis of the electrolyte and gas generation, resulting in the buildup of internal gas pressure in the parts. The risk associated with excessive leakage currents and increased pressure is greater for high-value advanced wet tantalum capacitors, but it has not yet been properly evaluated. In this work, in Part I, leakage currents in various types of tantalum capacitors have been analyzed over a wide range of voltages, temperatures, and times under bias. Gas generation and the level of internal pressure have been calculated in Part II for different case sizes and different hermeticity leak rates to assess the maximum allowable leakage currents. Effects related to electrolyte penetration into the glass seal area have been studied, and the possibility of failures analyzed, in Part III. Recommendations for screening and qualification to reduce the risk of failures have been suggested.

  1. Video coding for 3D-HEVC based on saliency information

    NASA Astrophysics Data System (ADS)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and to determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: salient area, middle area, and non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to a 38% reduction in encoding time without subjective quality loss in compression or rendering.
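
    The region-wise quantization step described above can be sketched as follows. This is not the paper's implementation: the saliency thresholds, base QP, and QP offsets are illustrative assumptions; the idea shown is only that each LCU is classified by mean saliency and less salient regions get a coarser quantizer.

```python
# Minimal sketch of saliency-driven per-LCU quantization (illustrative values).
import numpy as np

BASE_QP = 32
# Hypothetical offsets: finer quantization where the eye is likely to look.
QP_OFFSET = {"salient": -2, "middle": 0, "non_salient": +4}

def classify_lcu(saliency_block, lo=0.25, hi=0.6):
    """Label one LCU from the mean of its per-pixel saliency map (0..1)."""
    s = float(np.mean(saliency_block))
    if s >= hi:
        return "salient"
    if s >= lo:
        return "middle"
    return "non_salient"

def assign_qps(saliency_map, lcu=64):
    """Return a QP per 64x64 LCU for a whole-frame saliency map."""
    h, w = saliency_map.shape
    qps = {}
    for y in range(0, h, lcu):
        for x in range(0, w, lcu):
            region = classify_lcu(saliency_map[y:y + lcu, x:x + lcu])
            qps[(y, x)] = BASE_QP + QP_OFFSET[region]
    return qps

# Toy frame: left half salient, right half not.
sal = np.zeros((64, 128))
sal[:, :64] = 0.9
qps = assign_qps(sal)
print(qps)   # {(0, 0): 30, (0, 64): 36}
```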

  2. The Impact of Video Gaming on Decision-Making and Teamworking Skills

    ERIC Educational Resources Information Center

    Campus-Wide Information Systems, 2005

    2005-01-01

    Purpose: To discuss the considerable impact of video gaming on young players' decision-making and teamworking skills, and the belief that video games provide an invaluable "training camp" for business. Design/methodology/approach: An interview with John Beck, the author of the book Got Game: How a New Generation of Gamers Is Reshaping Business…

  3. YouTube Video Project: A "Cool" Way to Learn Communication Ethics

    ERIC Educational Resources Information Center

    Lehman, Carol M.; DuFrene, Debbie D.; Lehman, Mark W.

    2010-01-01

    The millennial generation embraces new technologies as a natural way of accessing and exchanging information, staying connected, and having fun. YouTube, a video-sharing site that allows users to upload, view, and share video clips, is among the latest "cool" technologies for enjoying quick laughs, employing a wide variety of corporate activities,…

  4. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  5. External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Niedra, Janis M.; Geng, Steven M.

    2013-01-01

    Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.

  6. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
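
    The radial-variance feature highlighted in the abstract can be sketched directly. This is a hedged illustration, not the authors' pipeline: gaze coordinates here are synthetic, and the feature is simply the variance of each gaze sample's radial error relative to the circular target path (a tangential analogue would use the angular component instead).

```python
# Sketch of one eye-tracking feature: radial variance on a circular
# visual-tracking paradigm. Gaze data below is synthetic.
import numpy as np

def radial_variance(gaze_xy, center, radius):
    """Variance of gaze radial error relative to a circular target path."""
    d = np.linalg.norm(gaze_xy - np.asarray(center), axis=1)
    return float(np.var(d - radius))

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
# Gaze samples on a radius-5 circle plus small radial noise (sd = 0.1).
r = 5.0 + rng.normal(0, 0.1, 200)
gaze = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

rv = radial_variance(gaze, center=(0.0, 0.0), radius=5.0)
print(f"radial variance: {rv:.4f}")   # close to 0.1**2 = 0.01
```

Larger radial variance would indicate looser tracking of the target circle, the property the study relates to psychomotor test performance.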

  7. MPEG-7 based video annotation and browsing

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens

    2003-11-01

    The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionality. Since manual video annotation can be time consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports and describe our approach to automatic annotation of equestrian sports videos. We concentrate in particular on MPEG-7 based feature extraction and content description, applying different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, the information is jointly stored with textual metadata in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user interface to provide content-based access to the video stream, as well as for media browsing on a streaming server.

  8. Video-assisted thoracoscopic surgery for posttraumatic hemothorax in the very elderly.

    PubMed

    Schweigert, Michael; Beron, Martin; Dubecz, Attila; Stadlhuber, Rudolf; Stein, Hubert

    2012-10-01

    Thoracic injury is a life-threatening condition, with advanced age being an independent risk factor for both higher morbidity and mortality. Furthermore, elderly patients often have severe comorbidity, and in cases of chest trauma with rib fractures and hemothorax their clinical condition is likely to deteriorate fast. The aim of this study was to investigate the feasibility and results of video-assisted thoracoscopy for the treatment of posttraumatic hemothorax in very elderly patients of 80 years or more. The outcomes of 60 consecutive patients who received video-assisted thoracoscopic surgery for posttraumatic hemothorax in a German tertiary referral hospital between 2006 and 2010 were reviewed in a retrospective case study. Patients older than 80 years were identified. There were 39 male and 21 female patients. The median age was 63.2 years. In-hospital mortality was 1.7% (1/60). Fifteen of the 60 patients were 80 years or older (80-91). The main reason for hemothorax was blunt chest trauma. Altogether 23 patients had fractures of three or more ribs, including six octogenarians. Elderly patients suffered from preexisting cardiopulmonary disease and were often referred to the thoracic surgeon with considerable delay. Video-assisted thoracoscopic surgery was feasible, and all octogenarian patients recovered well with no in-hospital mortality. Video-assisted thoracoscopic surgery for treatment of posttraumatic hemothorax shows excellent results in very elderly patients of 80 years or more. Despite severe comorbidity and often delayed surgery, all patients recovered. We therefore conclude that advanced age is no contraindication to surgical management of posttraumatic hemothorax by means of video-assisted thoracoscopy.

  9. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
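
    The calibrate-then-read step described above can be sketched in miniature. This is a hedged illustration of the idea only, not the patented system: the indicator position is reduced to a needle angle, the image-processing step that would locate the needle in a real frame is omitted, and the calibration points are invented.

```python
# Sketch: calibrate a meter region (angle -> value at two known readings),
# then generate readings by linear interpolation. Values are hypothetical.
def calibrate(angle_lo, value_lo, angle_hi, value_hi):
    """Return a function mapping a needle angle (degrees) to a meter value."""
    scale = (value_hi - value_lo) / (angle_hi - angle_lo)
    return lambda angle: value_lo + (angle - angle_lo) * scale

# Calibration: 45 degrees reads 0 units, 315 degrees reads 100 units.
read_meter = calibrate(45.0, 0.0, 315.0, 100.0)

print(read_meter(180.0))   # mid-scale -> 50.0
```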

  10. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1997-09-30

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.

  11. Examination of YouTube videos related to synthetic cannabinoids

    PubMed Central

    Kecojevic, Aleksandar; Basch, Corey H.

    2016-01-01

    The popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube that provide a platform for user-generated content can convey misinformation or glorify use of SCBs. The aim of this study was to describe the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms “K2” and “spice” were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n = 42). The most common content in the videos was description of K2 (n = 69), followed by mentioning dangers of using K2 (n = 47), mentioning side effects (n = 38), and showing a person using K2 (n = 37). One-third of the videos (n = 34) promoted use of K2, while 22 videos mentioned risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead the most widely viewed videos related to SCBs are uploaded by consumers. These consumer videos often give the viewer access to a wide array of uploaders describing, encouraging, participating in, and promoting use. PMID:27639268

  12. Examination of YouTube videos related to synthetic cannabinoids.

    PubMed

    Fullwood, M Dottington; Kecojevic, Aleksandar; Basch, Corey H

    2016-08-17

    The popularity of synthetic cannabinoids (SCBs) is increasing the chance of adverse health issues in the United States. Moreover, social media platforms such as YouTube that provide a platform for user-generated content can convey misinformation or glorify use of SCBs. The aim of this study was to describe the content of the most popular YouTube videos related to SCBs. Videos with at least 1000 views found under the search terms "K2" and "spice" were included in the analysis. The collective number of views was over 7.5 million. Nearly half of the videos were consumer produced (n=42). The most common content in the videos was description of K2 (n=69), followed by mentioning dangers of using K2 (n=47), mentioning side effects (n=38), and showing a person using K2 (n=37). One-third of the videos (n=34) promoted use of K2, while 22 videos mentioned risk of dying as a consequence of using K2. YouTube could be used as a surveillance tool to combat this epidemic, but instead the most widely viewed videos related to SCBs are uploaded by consumers. These consumer videos often give the viewer access to a wide array of uploaders describing, encouraging, participating in, and promoting use.

  13. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    PubMed Central

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize a 3D model of organs from an endoscopic video that captures the shape and surface appearance of the organ. A key aspect of our strategy is the use of advanced computer vision techniques and unmodified, clinical-grade endoscopy hardware with few constraints on the image acquisition protocol, which presents a low barrier to clinical translation. We validate the accuracy and robustness of our reconstruction and co-registration method using cystoscopy videos from tissue-mimicking bladder phantoms and show clinical utility during cystoscopy in the operating room for bladder cancer evaluation. As our method can powerfully augment the visual medical record of the appearance of internal organs, it is broadly applicable to endoscopy and represents a significant advance in cancer surveillance opportunities for big-data cancer research. PMID:28736658

  14. Public Education and Outreach Through Full-Dome Video Technology

    NASA Astrophysics Data System (ADS)

    Pollock, John

    2009-03-01

    My long-term goal is to enhance public understanding of complex systems that can be best demonstrated through richly detailed computer graphic animation displayed with full-dome video technology. My current focus is on health science advances in regenerative medicine, which helps the body heal itself. Such topics facilitate science learning and health literacy. My team develops multimedia presentations that bring the scientific and medical advances to the public through immersive high-definition video animation. Implicit in treating the topics of regenerative medicine is the need to address stem cell biology. The topics are clarified and presented from a platform of facts and balanced ethical consideration. The production process includes communicating scientific information about the excitement and importance of stem cell research. Principles of function are emphasized over specific facts or terminology by focusing on a limited but fundamental set of concepts. To achieve this, visually rich, biologically accurate 3D computer graphic environments are created to illustrate the cells, tissues, and organs of interest. A suite of films is produced and evaluated with pre-/post-surveys assessing attitudes, knowledge, and learning. Each film uses engaging interactive demonstrations to illustrate biological functions, the things that go wrong due to disease and disability, and the remedy provided by regenerative medicine. While the images are rich and detailed, the language is accessible and appropriate to the audience. The digital, high-definition video is also re-edited for presentation in other "flat screen" formats, increasing our distribution potential. Show content is also presented in an interactive web space (www.sepa.duq.edu) with complementing teacher resource guides, student workbooks, and companion video games.

  15. Evaluation of a video, telephone follow-ups, and an online forum as components of a psychoeducational intervention for caregivers of persons with advanced cancer.

    PubMed

    Leow, Mabel Q H; Chan, Sally W C

    2016-10-01

    Our aim was to evaluate caregivers' perceptions of a video, telephone follow-up, and online forum as components of a psychoeducational intervention. Qualitative semistructured face-to-face interviews were conducted with 12 participants two weeks post-intervention. The study was conducted from September of 2012 to May of 2015. Family caregivers were recruited from four home hospice organizations (HCA Hospice Care, Metta Hospice, Singapore Cancer Centre, and Agape Methodist Hospice) and the National Cancer Centre outpatient clinic in Singapore. A purposive sample was employed, and participants were recruited until data saturation. Qualitative interviews were transcribed verbatim. Transcripts were coded and analyzed using content analysis. Two of the research team members were involved in the data analysis. Two-thirds of participants were females (n = 8). Their ages ranged from 22 to 67 (mean = 50.50, SD = 11.53). About two-thirds were married (n = 7). Most participants were caring for a parent (n = 10), one for a spouse, and one for her mother-in-law. Caregivers favored the use of video for delivery of educational information. They liked the visual and audio aspects of the video. The ability to identify with the caregiver and scenarios in the video helped in the learning process. They appreciated telephone follow-ups from healthcare professionals for informational and emotional support. The online forum as a platform for sharing of information and provision of support was not received well by the caregivers in this study. The reasons for this included their being busy, not being computer savvy, rarely surfing the internet, and not feeling comfortable sharing with strangers on an online platform. This study provided insight into caregivers' perceptions of various components of a psychoeducational intervention. It also gave us a better understanding of how future psychoeducational interventions and support for caregivers of persons with advanced cancer could be

  16. Generating OER by Recording Lectures: A Case Study

    ERIC Educational Resources Information Center

    Llamas-Nistal, Martín; Mikic-Fonte, Fernando A.

    2014-01-01

    The University of Vigo, Vigo, Spain, has the objective of making all the teaching material generated by its teachers freely available. To attain this objective, it encourages the development of Open Educational Resources, especially videos. This paper presents an experience of recording lectures and generating the corresponding videos as a step…

  17. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  18. Advanced Training Techniques Using Computer Generated Imagery.

    DTIC Science & Technology

    1983-02-28

    …described in this report has been made and is submitted along with this report. Unfortunately, the quality possible on standard monochrome 525-line video tape is not representative of the quality of the presentations as displayed on a color beam-penetration visual system, but one can, through the… [Demonstration scenes listed: New York - LaGuardia (twilight); Minneapolis - St. Paul International (twilight); Minneapolis - St. Paul ground targets; sea surface and wake; KC-135 tanker.]

  19. Adaptive format conversion for scalable video coding

    NASA Astrophysics Data System (ADS)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
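
    The core AFC idea above (the encoder, which has the original sequence, picks the best conversion method per unit and transmits only the method index as enhancement data) can be sketched as follows. The two candidate filters, the 1-D signal, and the MSE criterion are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of adaptive format conversion: choose the upconversion
# method that best reconstructs the original, and signal only its index.
import numpy as np

def upsample_repeat(row):
    """Sample-and-hold upconversion (each sample duplicated)."""
    return np.repeat(row, 2)

def upsample_linear(row):
    """Linear-interpolation upconversion to twice the length."""
    n = len(row)
    return np.interp(np.arange(2 * n) / 2.0, np.arange(n), row)

def choose_afc(original, base):
    """Pick whichever conversion of the base layer best matches the original."""
    candidates = [upsample_repeat(base), upsample_linear(base)]
    errors = [np.mean((original - c) ** 2) for c in candidates]
    return int(np.argmin(errors))   # this index is the enhancement data

# A smooth ramp favors the linear filter over sample-and-hold.
ramp = np.arange(8, dtype=float)
best = choose_afc(ramp, ramp[::2])
print(best)   # 1 -> linear filter chosen
```

Because only a filter index per unit is sent, the enhancement bitrate can be far lower than residual coding, which is the scalability advantage the abstract describes.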

  20. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.
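
    The positional linkage the abstract describes (a sparse set of key video frames tied to airway path locations) suggests a data structure along the following lines. This is a speculative sketch for illustration only: the field names are invented, and the paper's actual model additionally fuses mucosal color onto MDCT-based endoluminal surfaces.

```python
# Hypothetical sketch of a CT-video linkage structure: key frames of the
# bronchoscopic video indexed by position in the MDCT-based airway tree.
from dataclasses import dataclass, field

@dataclass
class KeyFrameLink:
    frame_index: int          # index into the bronchoscopic video stream
    airway_branch: str        # branch label in the MDCT-based airway tree
    depth_mm: float           # position along that branch's centerline

@dataclass
class CTVideoModel:
    links: list = field(default_factory=list)

    def add(self, frame_index, airway_branch, depth_mm):
        self.links.append(KeyFrameLink(frame_index, airway_branch, depth_mm))

    def frames_for_branch(self, airway_branch):
        """Key frames recorded while the scope was in a given branch."""
        return [l.frame_index for l in self.links
                if l.airway_branch == airway_branch]

model = CTVideoModel()
model.add(120, "trachea", 14.0)
model.add(480, "right_main_bronchus", 6.5)
model.add(610, "right_main_bronchus", 19.2)
print(model.frames_for_branch("right_main_bronchus"))   # [480, 610]
```

Such a structure supports the uses the abstract names: a history of visited airway locations and quick visual access to representative frames for a given airway-wall section.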

  1. Avionics-compatible video facial cognizer for detection of pilot incapacitation.

    PubMed

    Steffin, Morris

    2006-01-01

    High-acceleration loss of consciousness is a serious problem for military pilots. In this laboratory, a video cognizer has been developed that in real time detects facial changes closely coupled to the onset of loss of consciousness. Efficient algorithms are compatible with video digital signal processing hardware and are thus configurable on an autonomous single board that generates alarm triggers to activate autopilot, and is avionics-compatible.

  2. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
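
    The double-buffered refresh memory described above can be illustrated with a toy model: the image generator writes pixels into a back buffer in arbitrary order (so no raster line ordering is required), while the display refreshes from the completed front buffer, and a swap makes the new frame visible all at once. Buffer sizes and pixel values here are illustrative.

```python
# Toy sketch of double-buffered refresh memory for a raster CIG system.
import numpy as np

class DoubleBuffer:
    def __init__(self, h, w):
        self.front = np.zeros((h, w), dtype=np.uint8)   # read by the display
        self.back = np.zeros((h, w), dtype=np.uint8)    # written by the CIG

    def write(self, y, x, value):
        """Scene pixels may arrive in any order -- no raster ordering needed."""
        self.back[y, x] = value

    def swap(self):
        """Present the completed frame; reuse the old front as the new back."""
        self.front, self.back = self.back, self.front

buf = DoubleBuffer(2, 2)
buf.write(1, 0, 255)      # written out of raster order
buf.write(0, 1, 128)
assert buf.front[1, 0] == 0   # not visible until the swap
buf.swap()
print(buf.front[1, 0], buf.front[0, 1])   # 255 128
```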

  3. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. The system is described.

  4. Impact of video games on plasticity of the hippocampus.

    PubMed

    West, G L; Konishi, K; Diarra, M; Benady-Chorney, J; Drisdelle, B L; Dahmani, L; Sodums, D J; Lepore, F; Jolicoeur, P; Bohbot, V D

    2017-08-08

    The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. A subsequent randomised longitudinal training experiment demonstrated that first-person shooting games reduce grey matter within the hippocampus in participants using non-spatial memory strategies. Conversely, participants who use hippocampus-dependent spatial strategies showed increased grey matter in the hippocampus after training. A control group that trained on 3D-platform games displayed growth in either the hippocampus or the functionally connected entorhinal cortex. A third study replicated the effect of action video game training on grey matter in the hippocampus. These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game.Molecular Psychiatry advance online publication, 8 August 2017; doi:10.1038/mp.2017.155.

  5. The Time-Frequency Signatures of Advanced Seismic Signals Generated by Debris Flows

    NASA Astrophysics Data System (ADS)

    Chu, C. R.; Huang, C. J.; Lin, C. R.; Wang, C. C.; Kuo, B. Y.; Yin, H. Y.

    2014-12-01

    Seismic monitoring is expected to reveal the process of a debris flow from the initiation area to the alluvial fan, because other field monitoring techniques, such as video cameras and ultrasonic sensors, are limited in detection range. For this reason, seismic approaches have been used as detection systems for debris flows over the past few decades. Analysis of the signatures of the seismic signals in the time and frequency domains can be used to identify the different phases of a debris flow. This study investigates the different stages of seismic signals due to debris flow, including the advanced signal, the main front, and the decaying tail. Moreover, the characteristics of the advanced signals preceding the arrival of the main front are discussed for warning purposes. This study presents a permanent system, composed of two seismometers, deployed along the bank of Ai-Yu-Zi Creek in Nantou County, one of the active debris-flow streams in Taiwan. The three-axis seismometer, with a frequency response of 7 s to 200 Hz, was developed by the Institute of Earth Sciences (IES), Academia Sinica for the purpose of detecting debris flows. The original idea of replacing the geophone system with the seismometer technique was to catch, thanks to the higher sensitivity, the advanced signals propagating from the upper reach of the stream before the debris flow arrives. In addition, low-frequency seismic waves can be detected early because of their low attenuation. However, to avoid unnecessary ambient vibrations, the sensitivity should be set lower than that of a general seismometer used for detecting teleseisms. Three debris flows with different mean velocities were detected in 2013 and 2014. The typical triangular shape was clearly demonstrated in the time series data and the spectrograms of the seismic signals from the three events. The frequency analysis showed that an enormous debris flow bearing huge boulders would induce low frequency seismic

  6. Video Creation: A Tool for Engaging Students to Learn Science

    NASA Astrophysics Data System (ADS)

    Courtney, A. R.

    2016-12-01

    Students today process information very differently than those of previous generations. They are used to getting their news from 140-character tweets, being entertained by YouTube videos, and Googling everything. Thus, traditional passive methods of content delivery do not work well for many of these millennials. All students, regardless of career goals, need to become scientifically literate to be able to function in a world where scientific issues are of increasing importance. Those who have had experience applying scientific reasoning to real-world problems in the classroom will be better equipped to make informed decisions in the future. The problem to be solved is how to present scientific content in a manner that fosters student learning in today's world. This presentation will describe how the appeal of technology and social communication via creation of documentary-style videos has been used to engage students to learn scientific concepts in a university non-science major course focused on energy and the environment. These video projects place control of the learning experience into the hands of the learner and provide an opportunity to develop critical thinking skills. Students discover how to locate scientifically reliable information by limiting searches to respected sources, and they synthesize the information through collaborative content creation to generate a "story". Video projects have a number of advantages over research paper writing. They allow students to develop collaboration skills and be creative in how they deliver the scientific content. Research projects are more effective when the audience is larger than just a teacher. Although our videos are used as peer-teaching tools in the classroom, they also are shown to a larger audience in a public forum to increase the challenge. Video will be the professional communication tool of the future. This presentation will cover the components of the video production process and instructional lessons

  7. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
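
    The camera control model described above maps pan/tilt commands onto a spherical viewspace. A minimal sketch of one plausible angle-to-direction convention follows; the actual model in the paper is calibrated per camera, so these formulas are illustrative only:

```python
import math

def ptz_to_direction(pan_deg, tilt_deg):
    """Map a PTZ camera's pan/tilt angles to a unit viewing direction
    on the spherical panoramic viewspace (pan about the vertical axis,
    tilt from the horizontal plane). Angle conventions are illustrative."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))

def direction_to_ptz(v):
    """Inverse mapping: recover pan/tilt from a unit direction vector."""
    x, y, z = v
    return (math.degrees(math.atan2(y, x)), math.degrees(math.asin(z)))

# Round trip: commanding the camera toward a target looked up on the
# geo-referenced map reduces to inverting the control model.
pan, tilt = direction_to_ptz(ptz_to_direction(45.0, -10.0))
print(pan, tilt)
```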

  8. Synchronizing A Stroboscope With A Video Camera

    NASA Technical Reports Server (NTRS)

    Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Dismond, Harriet R.

    1993-01-01

    Circuit synchronizes flash of light from stroboscope with frame and field periods of video camera. Sync stripper sends vertical-synchronization signal to delay generator, which generates trigger signal. Flashlamp power supply accepts delayed trigger signal and sends pulse of power to flash lamp. Designed for use in making short-exposure images that "freeze" flow in wind tunnel. Also used for making longer-exposure images obtained by use of continuous intense illumination.
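
    The delay-generator setting can be worked out from the video field period. A small illustrative calculation; the NTSC-style field rate and flash duration below are assumptions, not figures from the brief:

```python
# Hypothetical timing for centering a strobe flash within a video field.
# The field rate (59.94 Hz) and flash duration are assumed values for
# illustration; the NASA brief does not give specific numbers.
FIELD_RATE_HZ = 59.94
FLASH_US = 10.0  # flash duration, microseconds

field_period_us = 1e6 / FIELD_RATE_HZ          # ~16683 us per field
delay_us = field_period_us / 2 - FLASH_US / 2  # delay after the v-sync trigger

print(f"field period {field_period_us:.0f} us, delay {delay_us:.0f} us")
```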

  9. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper, we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm, an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph, and the graph is summarized spatially, temporally and semantically using ontology-guided sub-graph matching and re-writing. The system exploits domain-specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.

  10. High Resolution, High Frame Rate Video Technology

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) State of the art in the video system performance; (2) Development plan for the HHV system; (3) Advanced technology for image gathering, coding, and processing; (4) Data compression applied to HHV; (5) Data transmission networks; and (6) Results of the users' requirements survey conducted by NASA.

  11. Digital video technologies and their network requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. P. Tsang; H. Y. Chen; J. M. Brandt

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  12. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real-time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed employing a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature. The first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between that signature and the signatures in the database, using the ordinal signature and the SIFT descriptors separately. For the similarity measure, Boolean operators are utilized during the matching process in addition to the Euclidean distance. We have tested our system by performing several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by the TRECVID 2009 "content-based copy detection" task. In addition, we refer to the requirements issued in the call for proposals by the MPEG standards body for a similar task. Initial results show that our framework is effective and robust. Compared to our previous work, which reduced the storage space and processing time of the ordinal-based method, introducing the SIFT features raises the overall accuracy to an F1 measure of about 96% (an improvement of about 8%).
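
    An ordinal-based frame signature of the kind mentioned above is commonly computed by ranking block-mean intensities, which makes it robust to global brightness and contrast changes introduced by re-encoding. A minimal sketch; the grid size and data are illustrative, not the paper's exact scheme:

```python
import numpy as np

def ordinal_signature(frame, grid=(3, 3)):
    """Ordinal-based signature of a grayscale frame: partition into a
    grid of blocks, average each block, and keep only the rank order of
    the block means (robust to monotonic intensity changes)."""
    h, w = frame.shape
    gh, gw = grid
    means = [frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw].mean()
             for i in range(gh) for j in range(gw)]
    return np.argsort(np.argsort(means))  # ranks 0..gh*gw-1

rng = np.random.default_rng(1)
frame = rng.random((90, 120))
copy = np.clip(0.5 * frame + 0.2, 0, 1)  # brightness/contrast-shifted copy

# The ordinal signature is invariant to this monotonic intensity change,
# so the suspect copy matches the original exactly.
same = np.array_equal(ordinal_signature(frame), ordinal_signature(copy))
print(same)
```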

  13. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  14. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models based on Non-Uniform Rational Basis Splines (NURBS), with multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure, the courtyard of the West Block on Parliament Hill in Ottawa (Ontario), and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  15. Single-layer HDR video coding with SDR backward compatibility

    NASA Astrophysics Data System (ADS)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion, generating a backward-compatible SDR video with side dynamic metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant). The decoded SDR video can be directly rendered on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches using the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETFs), respectively.
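
    The pre-/post-processing pair described above can be illustrated with a toy invertible tone curve. The actual SL-HDR1 curves and metadata syntax are considerably more elaborate, so everything below is a simplified stand-in, not the standard's mapping:

```python
# Illustrative single-layer HDR->SDR idea (not the actual SL-HDR1 curves):
# a per-picture invertible tone mapping whose parameter travels as
# dynamic metadata, letting HDR receivers reconstruct the signal.

def hdr_to_sdr(x, gamma):
    """Forward tone mapping of normalized linear HDR luminance in [0, 1]."""
    return x ** gamma

def sdr_to_hdr(y, gamma):
    """Post-processing: functional inverse using the metadata parameter."""
    return y ** (1.0 / gamma)

gamma = 0.45          # per-picture metadata, chosen by the pre-processor
hdr = [0.0, 0.01, 0.2, 1.0]
sdr = [hdr_to_sdr(v, gamma) for v in hdr]          # distributed/rendered on SDR
restored = [sdr_to_hdr(v, gamma) for v in sdr]     # HDR reconstruction

max_err = max(abs(a - b) for a, b in zip(hdr, restored))
print(max_err)
```

    SDR-only decoders simply render `sdr`; HDR decoders apply the inverse using the transmitted parameter.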

  16. Design of a Facility to Test the Advanced Stirling Radioisotope Generator Engineering Unit

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Schreiber, Jeffrey G.; Oriti, Salvatore M.; Meer, David W.; Brace, Michael H.; Dugala, Gina

    2010-01-01

    The Advanced Stirling Radioisotope Generator (ASRG), a high efficiency generator, is being considered for space missions. An engineering unit, the ASRG engineering unit (EU), was designed and fabricated by Lockheed Martin under contract to the Department of Energy. This unit is currently under extended operation test at the NASA Glenn Research Center (GRC) to generate performance data and validate the life and reliability predictions for the generator and the Stirling convertors. A special test facility was designed and built for the ASRG EU. This paper summarizes details of the test facility design, including the mechanical mounting, heat-rejection system, argon system, control systems, and maintenance. The effort proceeded from requirements definition through design, analysis, build, and test. Initial testing and facility performance results are discussed.

  17. Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries, educational videos, etc. As compared to relatively structured format videos like CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.

  18. Video game addiction, ADHD symptomatology, and video game reinforcement.

    PubMed

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < 0.001). Type of game played or preferred the most was not associated with addiction severity, ps > .05. The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.
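
    A moderation analysis of the kind described (an interaction between ADHD symptom severity and game type predicting addiction severity) can be sketched on synthetic data. The variables and effect sizes below are fabricated for illustration only:

```python
import numpy as np

# Synthetic moderation analysis in the spirit of the study: does game
# type moderate the ADHD-symptom -> addiction-severity relationship?
rng = np.random.default_rng(42)
n = 500
adhd = rng.normal(0, 1, n)            # ADHD symptom severity (z-scored)
game = rng.integers(0, 2, n).astype(float)  # 0/1 game type (reinforcement)
addiction = 0.7 * adhd + 0.0 * game + 0.0 * adhd * game + rng.normal(0, 1, n)

# OLS with an interaction term, fitted via least squares.
X = np.column_stack([np.ones(n), adhd, game, adhd * game])
beta, *_ = np.linalg.lstsq(X, addiction, rcond=None)
print(dict(zip(["intercept", "adhd", "game", "adhd_x_game"], beta.round(2))))
```

    As in the study's findings, the main effect of symptom severity is recovered while the interaction stays near zero.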

  19. The Measurement of Intelligence in the XXI Century using Video Games.

    PubMed

    Quiroga, M A; Román, F J; De La Fuente, J; Privado, J; Colom, R

    2016-12-05

    This paper reviews the use of video games for measuring intelligence differences and reports two studies analyzing the relationship between intelligence and performance on a leisure video game. In the first study, the main focus was to design an intelligence test using puzzles from the video game. Forty-seven young participants played "Professor Layton and the curious village"® for a maximum of 15 hours and completed a set of standardized intelligence tests. Results show that the time required for completing the game interacts with intelligence differences: the higher the intelligence, the lower the time (d = .91). Furthermore, a set of 41 puzzles showed excellent psychometric properties. The second study, done seven years later, confirmed the previous findings. We finally discuss the pros and cons of commercial video games as tools for measuring cognitive abilities, underscoring that psychologists must develop their own intelligence video games and delineate their key features for the next generation of measurement devices.

  20. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of

  1. Advanced Direct-Drive Generator for Improved Availability of Oscillating Wave Surge Converter Power Generation Systems Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Englebretson, Steven; Ouyang, Wen; Tschida, Colin

    This report summarizes the activities conducted under the DOE-EERE funded project DE-EE0006400, where ABB Inc. (ABB), in collaboration with Texas A&M’s Advanced Electric Machines & Power Electronics (EMPE) Lab and Resolute Marine Energy (RME), designed, derisked, developed, and demonstrated a novel magnetically geared electrical generator for direct-drive, low-speed, high-torque MHK applications. The project objective was to investigate a novel and compact direct-drive electric generator and its system aspects that would enable elimination of hydraulic components in the Power Take-Off (PTO) of a Marine and Hydrokinetic (MHK) system with an oscillating wave surge converter (OWSC), thereby improving the availability of the MHK system. The scope of this project was limited to the development and dry lab demonstration of a low-speed generator to enable future direct-drive MHK systems.

  2. The role of taxonomies in social media and the semantic web for health education. A study of SNOMED CT terms in YouTube health video tags.

    PubMed

    Konstantinidis, S; Fernandez-Luque, L; Bamidis, P; Karlsen, R

    2013-01-01

    An increasing amount of health education resources for patients and professionals are distributed via social media channels. For example, thousands of health education videos are disseminated via YouTube. Often, tags are assigned by the disseminator. However, the lack of use of standardized terminologies in those tags and the presence of misleading videos make it particularly hard to retrieve relevant videos. i) Identify the use of standardized medical thesauri (SNOMED CT) in YouTube Health videos tags from preselected YouTube Channels and demonstrate an information technology (IT) architecture for treating the tags of these health (video) resources. ii) Investigate the relative percentage of the tags used that relate to SNOMED CT terms. As such resources may play a key role in educating professionals and patients, the use of standardized vocabularies may facilitate the sharing of such resources. iii) Demonstrate how such resources may be properly exploited within the new generation of semantically enriched content or learning management systems that allow for knowledge expansion through the use of linked medical data and numerous literature resources also described through the same vocabularies. We implemented a video portal integrating videos from 500 US Hospital channels. The portal integrated 4,307 YouTube videos regarding surgery as described by 64,367 tags. BioPortal REST services were used within our portal to match SNOMED CT terms with YouTube tags by both exact match and non-exact match. The whole architecture was complemented with a mechanism to enrich the retrieved video resources with other educational material residing in other repositories by following contemporary semantic web advances, in the form of Linked Open Data (LOD) principles. 
The average percentage of YouTube tags that were expressed using SNOMED CT terms was about 22.5%, while one third of YouTube tags per video contained a SNOMED CT term in a loose search; this analogy became one tenth in
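
    The exact and loose tag matching described above can be sketched as follows. The real system queried SNOMED CT through BioPortal REST services, so the tiny term list and simple string matching here are purely illustrative:

```python
# Toy version of the tag-to-terminology matching described above.
# A hypothetical term list stands in for SNOMED CT.
snomed_terms = {"appendectomy", "laparoscopy", "hernia repair"}
video_tags = ["surgery", "Laparoscopy", "hernia", "robot", "appendectomy"]

def exact_matches(tags, terms):
    """Tags that equal a terminology term (case-insensitive)."""
    return [t for t in tags if t.lower() in terms]

def loose_matches(tags, terms):
    """Tags that appear inside a term, or contain one (a loose search)."""
    return [t for t in tags
            if any(t.lower() in term or term in t.lower() for term in terms)]

exact = exact_matches(video_tags, snomed_terms)
loose = loose_matches(video_tags, snomed_terms)
print(len(exact) / len(video_tags), len(loose) / len(video_tags))
```

    Per-channel statistics like the percentages reported in the study reduce to aggregating these ratios over all video tags.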

  3. Generation of Well-Defined Micro/Nanoparticles via Advanced Manufacturing Techniques for Therapeutic Delivery

    PubMed Central

    Zhang, Peipei; Xia, Junfei; Luo, Sida

    2018-01-01

    Micro/nanoparticles have great potentials in biomedical applications, especially for drug delivery. Existing studies identified that major micro/nanoparticle features including size, shape, surface property and component materials play vital roles in their in vitro and in vivo applications. However, a demanding challenge is that most conventional particle synthesis techniques such as emulsion can only generate micro/nanoparticles with a very limited number of shapes (i.e., spherical or rod shapes) and have very loose control in terms of particle sizes. We reviewed the advanced manufacturing techniques for producing micro/nanoparticles with precisely defined characteristics, emphasizing the use of these well-controlled micro/nanoparticles for drug delivery applications. Additionally, to illustrate the vital roles of particle features in therapeutic delivery, we also discussed how the above-mentioned micro/nanoparticle features impact in vitro and in vivo applications. Through this review, we highlighted the unique opportunities in generating controllable particles via advanced manufacturing techniques and the great potential of using these micro/nanoparticles for therapeutic delivery. PMID:29670013

  4. Physiological responses during exercise with video games in patients with cystic fibrosis: A systematic review.

    PubMed

    Carbonera, Raquel Pinto; Vendrusculo, Fernanda Maria; Donadio, Márcio Vinícius Fagundes

    2016-10-01

    Interactive video games are recently being used as an exercise tool in cystic fibrosis (CF). This study aimed to assess the literature describing whether video games generate a physiological response similar to the exercise intensity needed for training in CF. An online search in PubMed, Embase, Cochrane, SciELO, LILACS and PEDro databases was conducted and original studies describing physiological responses of the use of video games as exercise in CF were included. In four out of five studies, the heart rate achieved during video games was within the standards recommended for training (60-80%). Two studies assessed VO2 and showed higher levels compared to the six-minute walk test. No desaturation was reported. Most games were classified as moderate intensity. Only one study used a maximum exercise test as comparator. Interactive video games generate a heart rate response similar to the intensity required for training in CF patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  6. Water surface modeling from a single viewpoint video.

    PubMed

    Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip

    2013-07-01

    We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between the visual quality and the production cost: It automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.

  7. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    PubMed Central

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-01-01

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
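
    The Jeffrey divergence used above for redundancy elimination is a symmetrized Kullback-Leibler measure between color histograms. The form below, with the average histogram as reference, is one common definition in the retrieval literature, and the histograms are synthetic stand-ins for WCE frames:

```python
import numpy as np

def jeffrey_divergence(h1, h2, eps=1e-12):
    """Jeffrey divergence between two histograms: a symmetrized
    Kullback-Leibler measure; small values flag redundant (visually
    similar) neighboring frames."""
    p = h1 / (h1.sum() + eps)
    q = h2 / (h2.sum() + eps)
    m = (p + q) / 2
    return float(np.sum(p * np.log((p + eps) / (m + eps)) +
                        q * np.log((q + eps) / (m + eps))))

rng = np.random.default_rng(0)
a = rng.random(64)             # color histogram of frame t (hypothetical)
b = a + 0.01 * rng.random(64)  # near-duplicate next frame
c = rng.random(64)             # unrelated frame

d_ab = jeffrey_divergence(a, b)
d_ac = jeffrey_divergence(a, c)
print(d_ab < d_ac)  # near-duplicates score far lower than unrelated frames
```

    Thresholding this score between consecutive frames is one way to drop redundant frames before the classification stage.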

  8. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors.

    PubMed

    Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook

    2014-09-15

    Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.

  9. Combining 3D structure of real video and synthetic objects

    NASA Astrophysics Data System (ADS)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach for combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has been used in the fields previously mentioned. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the exclusive use of computer graphics. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences; due to the contents of the test sequences, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, Step (3) is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.

  10. Prior video game exposure does not enhance robotic surgical performance.

    PubMed

    Harper, Jonathan D; Kaiser, Stefan; Ebrahimi, Kamyar; Lamberton, Gregory R; Hadley, H Roger; Ruckle, Herbert C; Baldwin, D Duane

    2007-10-01

    Prior research has demonstrated that counterintuitive laparoscopic surgical skills are enhanced by experience with video games. A similar relation with robotic surgical skills has not been tested. The purpose of this study was to determine whether prior video-game experience enhances the acquisition of robotic surgical skills. A series of 242 preclinical medical students completed a self-reported video-game questionnaire detailing the frequency, duration, and peak playing time. The 10 students with the highest and lowest video-game exposure completed a follow-up questionnaire further quantifying video game, sports, musical instrument, and craft and hobby exposure. Each subject viewed a training video demonstrating the use of the da Vinci surgical robot in tying knots, followed by 3 minutes of proctored practice time. Subjects then tied knots for 5 minutes while an independent blinded observer recorded the number of knots tied, missed knots, frayed sutures, broken sutures, and mechanical errors. The mean playing time for the 10 game players was 15,136 total hours (range 5,840-30,000 hours). Video-game players tied fewer knots than nonplayers (5.8 v 9.0; P = 0.04). Subjects who had played sports for at least 4 years had fewer mechanical errors (P = 0.04), broke fewer sutures (P = 0.01), and committed fewer total errors (P = 0.01). Similarly, those playing musical instruments longer than 5 years missed fewer knots (P = 0.05). In the extremes of video-game experience tested in this study, game playing was inversely correlated with the ability to learn robotic suturing. This study suggests that advanced surgical skills such as robotic suturing may be learned more quickly by athletes and musicians. Prior extensive video-game exposure had a negative impact on robotic performance.

  11. Clinical use of three-dimensional video measurements of eye movements

    NASA Technical Reports Server (NTRS)

    Merfeld, D. M.; Black, F. O.; Wade, S.; Paloski, W. H. (Principal Investigator)

    1998-01-01

    Noninvasive measurements of three-dimensional eye position can be accurately achieved with video methods. A case study showing the potential clinical benefit of these enhanced measurements is presented along with some thoughts about technological advances, essential for clinical application, that are likely to occur in the next several years.

  12. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing an increasingly important role in people's social life. Real-time alerting of threatening events and searching for interesting content in large-scale stored video footage require a human operator to pay full attention to a monitor for long periods. This labor-intensive mode has limited the effectiveness and efficiency of such systems. A framework of simple event detection is presented to advance the automation of video surveillance. An improved inner key-point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real time. Future work is also discussed.
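    The frame-differencing stage of such a pipeline is simple enough to sketch directly (the threshold value is illustrative, and the code assumes background motion has already been compensated by the key-point matching step):

```python
import numpy as np

def foreground_mask(prev_gray, curr_gray, thresh=25):
    """Binary foreground mask by absolute frame differencing.

    Inputs are motion-compensated grayscale frames (uint8);
    the threshold of 25 gray levels is an illustrative choice.
    """
    # Cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# A static scene with one bright object appearing in the current frame
prev = np.zeros((48, 48), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 200
mask = foreground_mask(prev, curr)
```

    The resulting mask regions would then be passed to the HOG-based classifiers and the mean-shift tracker described above.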

  13. Streaming Video--The Wave of the Video Future!

    ERIC Educational Resources Information Center

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  14. Advances in Thermal Spray Coatings for Gas Turbines and Energy Generation: A Review

    NASA Astrophysics Data System (ADS)

    Hardwicke, Canan U.; Lau, Yuk-Chiu

    2013-06-01

    Functional coatings are widely used in energy generation equipment in industries such as renewables, oil and gas, propulsion engines, and gas turbines. Intelligent thermal spray processing is vital in many of these areas for efficient manufacturing. Advanced thermal spray coating applications include thermal management, wear, oxidation and corrosion resistance, sealing systems, vibration and sound absorbance, and component repair. This paper reviews the current status of materials, equipment, processing, and properties for key coatings in the energy industry, especially developments in large-scale gas turbines. In addition to the most recent industrial advances in thermal spray technologies, future technical needs are also highlighted.

  15. Video conferencing made easy

    NASA Technical Reports Server (NTRS)

    Larsen, D. Gail; Schwieder, Paul R.

    1993-01-01

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to users concerning conference availability, scheduling, initiation, and termination. The menus are mouse controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  16. Video conferencing made easy

    NASA Astrophysics Data System (ADS)

    Larsen, D. Gail; Schwieder, Paul R.

    1993-02-01

    Network video conferencing is advancing rapidly throughout the nation, and the Idaho National Engineering Laboratory (INEL), a Department of Energy (DOE) facility, is at the forefront of the development. Engineers at INEL/EG&G designed and installed a unique DOE video conferencing system offering many outstanding features, including true multipoint conferencing, user-friendly design and operation with no full-time operators required, and the potential for cost-effective expansion of the system. One area where INEL/EG&G engineers made a significant contribution to video conferencing was in the development of effective, user-friendly, end-station-driven scheduling software. A PC at each user site is used to schedule conferences via a windows package. This software interface provides information to users concerning conference availability, scheduling, initiation, and termination. The menus are mouse controlled. Once a conference is scheduled, a workstation at the hubs monitors the network to initiate all scheduled conferences. No active operator participation is required once a user schedules a conference through the local PC; the workstation automatically initiates and terminates the conference as scheduled. As each conference is scheduled, hard-copy notification is also printed at each participating site. Video conferencing is the wave of the future. The use of these user-friendly systems will save millions in lost productivity and travel costs throughout the nation. The ease of operation and conference scheduling will play a key role in the extent to which industry uses this new technology. INEL/EG&G has developed a prototype scheduling system for both commercial and federal government use.

  17. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little

  18. The LivePhoto Physics videos and video analysis site

    NASA Astrophysics Data System (ADS)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  19. Video consultation use by Australian general practitioners: video vignette study.

    PubMed

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  20. Instant Video Revisiting for Reflection: Extending the Learning of Children and Teachers.

    ERIC Educational Resources Information Center

    Hong, Seong B.; Broderick, Jane T.

    This article discusses how instant video revisiting (IVR) promotes reflective thinking for both teachers and children. IVR was used as a daily classroom experience with both the children and the teachers throughout one semester in two preschool classrooms with children 2.5 to 5 years old. The teachers used a digital video camera to generate data…

  1. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    PubMed

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature, mel-frequency cepstral coefficients, is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in the dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
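    A toy version of the pairwise ranking idea can make the training signal concrete. Here a plain hinge loss stands in for the paper's latent loss, and the feature dimension, margin, and learning rate are invented for illustration: each constraint says an edited (highlight) segment should score above a trimmed segment.

```python
import numpy as np

def pairwise_ranking_loss(w, x_pos, x_neg, margin=1.0):
    """Hinge loss for one pairwise constraint: penalize whenever the
    edited segment x_pos does not outscore the trimmed segment x_neg
    by at least `margin`. A simplified stand-in for the latent loss."""
    return max(0.0, margin - w @ x_pos + w @ x_neg)

rng = np.random.default_rng(0)
w = np.zeros(8)
# Synthetic pairs: highlight features shifted up, trimmed features down
pairs = [(rng.normal(1, 1, 8), rng.normal(-1, 1, 8)) for _ in range(50)]

for _ in range(20):                      # sub-gradient descent
    for xp, xn in pairs:
        if pairwise_ranking_loss(w, xp, xn) > 0:
            w += 0.01 * (xp - xn)        # push scores apart

# Edited segments should now rank above trimmed ones on average
margins = [(w @ xp) - (w @ xn) for xp, xn in pairs]
```

    The paper's actual model additionally treats which raw segments match the edit as latent variables, handled with the EM-like procedure mentioned above.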

  2. Modern Warfare: Video Game Playing and Posttraumatic Symptoms in Veterans.

    PubMed

    Etter, Darryl; Kamen, Charles; Etter, Kelly; Gore-Felton, Cheryl

    2017-04-01

    Many of the current generation of veterans grew up with video games, including military first-person shooter (MFPS) video games. In MFPS games, players take the role of soldiers engaged in combat in environments modeled on real-life warzones. Exposure to trauma-congruent game content may either serve to exacerbate or to ameliorate posttraumatic symptoms. The current study examined the relationship between MFPS and other shooter video game playing and posttraumatic stress disorder (PTSD) symptoms among current and former members of the military (N = 111). Results indicated that video game play was very common, and 41.4% of participants reported playing MFPS or other shooter games (shooter players group). The shooter players group reported higher levels of PTSD symptoms than participants who did not play any video or shooter games (nonshooter/nonplayers group; d = 0.44); however, playing shooter games was not predictive of PTSD symptoms after accounting for personality, combat exposure, and social support variables. This may indicate that the same psychosocial factors predict both PTSD and shooter video game play. Although veterans may benefit from the development and use of clinical applications of video games in PTSD treatment, clinical attention should continue to focus on established psychosocial predictors of PTSD symptoms. Copyright © 2017 International Society for Traumatic Stress Studies.

  3. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H.

    2000-12-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial configuration, successfully placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
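    The abstract does not give the placement algorithm in detail; a hypothetical greedy reconstruction consistent with the stated O(M log M) behavior (sort titles by expected demand, then first-fit each onto the least-loaded disk with room) might look like:

```python
def place_videos(movies, disks, capacity):
    """Greedy placement sketch: movies maps title -> expected demand
    (in bandwidth units), each disk has the same bandwidth capacity.
    Sorting dominates the cost, giving O(M log M) in the number of
    titles. This is an illustrative reconstruction, not the paper's
    exact algorithm."""
    load = {d: 0 for d in disks}
    placement = {}
    for title, demand in sorted(movies.items(), key=lambda kv: -kv[1]):
        # Least-loaded disk that still has room for this title
        disk = min((d for d in disks if load[d] + demand <= capacity),
                   key=lambda d: load[d], default=None)
        if disk is None:
            return None  # out of resources: add more servers/disks
        load[disk] += demand
        placement[title] = disk
    return placement

movies = {"A": 5, "B": 3, "C": 3, "D": 2}
plan = place_videos(movies, ["disk0", "disk1"], capacity=7)
```

    Replication for popular titles would amount to placing several copies of the same video, with each copy treated as a separate item here.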

  4. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H. C.

    2001-01-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial configuration, successfully placing videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.

  5. PRagmatic trial Of Video Education in Nursing homes: The design and rationale for a pragmatic cluster randomized trial in the nursing home setting.

    PubMed

    Mor, Vincent; Volandes, Angelo E; Gutman, Roee; Gatsonis, Constantine; Mitchell, Susan L

    2017-04-01

    Background/Aims Nursing homes are complex healthcare systems serving an increasingly sick population. Nursing homes must engage patients in advance care planning, but do so inconsistently. Video decision support tools improved advance care planning in small randomized controlled trials. Pragmatic trials are increasingly employed in health services research, although not commonly in the nursing home setting to which they are well-suited. This report presents the design and rationale for a pragmatic cluster randomized controlled trial that evaluated the "real world" application of an Advance Care Planning Video Program in two large US nursing home healthcare systems. Methods PRagmatic trial Of Video Education in Nursing homes was conducted in 360 nursing homes (N = 119 intervention/N = 241 control) owned by two healthcare systems. Over an 18-month implementation period, intervention facilities were instructed to offer the Advance Care Planning Video Program to all patients. Control facilities employed usual advance care planning practices. Patient characteristics and outcomes were ascertained from Medicare Claims, Minimum Data Set assessments, and facility electronic medical record data. Intervention adherence was measured using a Video Status Report embedded into electronic medical record systems. The primary outcome was the number of hospitalizations/person-day alive among long-stay patients with advanced dementia or cardiopulmonary disease. The rationale for the approaches to facility randomization and recruitment, intervention implementation, population selection, data acquisition, regulatory issues, and statistical analyses are discussed. Results The large number of well-characterized candidate facilities enabled several unique design features including stratification on historical hospitalization rates, randomization prior to recruitment, and 2:1 control to intervention facilities ratio. Strong endorsement from corporate leadership made randomization

  6. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  7. Quantifying the rapid evolution of a nourishment project with video imagery

    USGS Publications Warehouse

    Elko, N.A.; Holman, R.A.; Gelfenbaum, G.

    2005-01-01

    Spatially and temporally high-resolution video imagery was combined with traditional surveyed beach profiles to investigate the evolution of a rapidly eroding beach nourishment project. Upham Beach is a 0.6-km beach located downdrift of a structured inlet on the west coast of Florida. The beach was stabilized in a seaward-advanced position during the 1960s and has been nourished every 4-5 years since 1975. During the 1996 nourishment project, 193,000 m³ of sediment advanced the shoreline as much as 175 m. Video images were collected concurrently with traditional surveys during the 1996 nourishment project to test video imaging as a nourishment monitoring technique. Video imagery illustrated morphologic changes that were unapparent in survey data. Increased storminess during the second (El Niño) winter after the 1996 project resulted in increased erosion rates of 0.4 m/d (135.0 m/y), compared with 0.2 m/d (69.4 m/y) during the first winter. The measured half-life, the time at which 50% of the nourished material remains, of the nourishment project was 0.94 years. A simple analytical equation indicates reasonable agreement with the measured values, suggesting that project evolution follows a predictable pattern of exponential decay. Longshore planform equilibration does not occur on Upham Beach; rather, sediment diffuses downdrift until 100% of the nourished material erodes. The wide nourished beach erodes rapidly due to the lack of sediment bypassing from the north and the stabilized headland at Upham Beach that is exposed to wave energy.
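    The exponential-decay model implied by the reported 0.94-year half-life can be written down directly; the decay constant follows from k = ln(2) / t_half (the checks below are illustrative, not values from the study):

```python
import math

def remaining_volume(v0, t_years, half_life=0.94):
    """Exponentially decaying nourishment volume V(t) = V0 * exp(-k t),
    with k = ln(2) / half_life. The 0.94-year half-life is the value
    measured for the 1996 Upham Beach project."""
    k = math.log(2) / half_life
    return v0 * math.exp(-k * t_years)

v0 = 193_000  # cubic meters of sediment placed in 1996
# Exactly half the fill remains after one half-life
assert abs(remaining_volume(v0, 0.94) - v0 / 2) < 1e-6
```

    Under this model roughly an eighth of the fill survives three half-lives (about 2.8 years), consistent with the 4-5 year renourishment interval reported above.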

  8. Objective video presentation QoE predictor for smart adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of video being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive-bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
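    The QoE-driven adaptation idea can be caricatured in a few lines: choose the rendition with the best predicted per-device QoE that fits the available bandwidth, rather than simply the highest feasible bitrate. The ladder and QoE scores below are placeholders, not SSIMplus outputs:

```python
def choose_rendition(renditions, bandwidth_kbps):
    """QoE-first rendition selection sketch. Each rendition carries a
    predicted perceptual quality score (here a hypothetical 0-100
    value standing in for an SSIMplus-style prediction)."""
    feasible = [r for r in renditions if r["bitrate"] <= bandwidth_kbps]
    return max(feasible, key=lambda r: r["qoe"], default=None)

ladder = [
    {"bitrate": 1000, "qoe": 70},
    {"bitrate": 3000, "qoe": 88},
    {"bitrate": 6000, "qoe": 90},  # tiny QoE gain for double the bits
]
choice = choose_rendition(ladder, 4000)
```

    A bitrate-first policy would always climb the ladder as far as bandwidth allows; a QoE-first policy can hold a lower bitrate when the perceptual gain of the next rung is negligible, which is what smooths the viewer's experience.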

  9. High temperature, harsh environment sensors for advanced power generation systems

    NASA Astrophysics Data System (ADS)

    Ohodnicki, P. R.; Credle, S.; Buric, M.; Lewis, R.; Seachman, S.

    2015-05-01

    One mission of the Crosscutting Technology Research program at the National Energy Technology Laboratory is to develop a suite of sensors and controls technologies that will ultimately increase efficiencies of existing fossil-fuel fired power plants and enable a new generation of more efficient and lower emission power generation technologies. The program seeks to accomplish this mission through soliciting, managing, and monitoring a broad range of projects both internal and external to the laboratory which span sensor material and device development, energy harvesting and wireless telemetry methodologies, and advanced controls algorithms and approaches. A particular emphasis is placed upon harsh environment sensing for compatibility with high temperature, erosive, corrosive, and highly reducing or oxidizing environments associated with large-scale centralized power generation. An overview of the full sensors and controls portfolio is presented and a selected set of current and recent research successes and on-going projects are highlighted. A more detailed emphasis will be placed on an overview of the current research thrusts and successes of the in-house sensor material and device research efforts that have been established to support the program.

  10. Physics and Video Analysis

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  11. Keys to Successful Interactive Storytelling: A Study of the Booming "Choose-Your-Own-Adventure" Video Game Industry

    ERIC Educational Resources Information Center

    Tyndale, Eric; Ramsoomair, Franklin

    2016-01-01

    Video gaming has become a multi-billion dollar industry that continues to capture the hearts, minds and pocketbooks of millions of gamers who span all ages. Narrative and interactive games form part of this market. The popularity of tablet computers and the technological advances of video games have led to a renaissance in the genre for both youth…

  12. A Secure and Robust Object-Based Video Authentication System

    NASA Astrophysics Data System (ADS)

    He, Dajun; Sun, Qibin; Tian, Qi

    2004-12-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity of video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to the selected coefficients to generate a robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
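    A minimal sketch of the content-signature step (feature coefficients hashed under a secret key) is given below. This is not the paper's algorithm: the ART feature extraction, the ECC encoding, and the DFT-domain embedding are all omitted, and the quantization step and function names are assumptions for illustration:

    ```python
    import hashlib

    def generate_auth_watermark(feature_coeffs, key: bytes) -> bytes:
        """Hash quantized shape-descriptor coefficients together with a
        secret key to produce a content-dependent authentication tag.
        Quantization makes the tag tolerant of tiny coefficient noise."""
        quantized = bytes((int(c * 16) & 0xFF) for c in feature_coeffs)
        return hashlib.sha256(key + quantized).digest()
    ```

    The same coefficients and key always reproduce the same tag, while a materially changed object (different coefficients) yields a different one, which is the property a semifragile scheme builds on.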

  13. 77 FR 48102 - Closed Captioning and Video Description of Video Programming

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-13

    ... Captioning and Video Description of Video Programming AGENCY: Federal Communications Commission. [[Page 48103..., enacted by the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA), which...) establishing requirements for closed captioning on video programming to ensure access by persons with hearing...

  14. A novel video recommendation system based on efficient retrieval of human actions

    NASA Astrophysics Data System (ADS)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need efficiently. Hence, Recommender Systems (RSs) are used to find users' most favored items. Finding these items relies on item or user similarities. However, many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) for personalized recommendation. Differing viewpoints and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of videos most similar to the query. Because most videos involve humans, we present a novel, low-complexity, scalable method to recommend videos based on a model of the included action. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than commonly used methods.
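    The retrieval-based recommendation step can be sketched generically: rank the library by a dissimilarity measure against the query's action descriptor and return the closest matches. The descriptor and measure below are placeholders, not the paper's interest-point representation or its fuzzy measure:

    ```python
    def recommend(query_descriptor, library, dissimilarity, top_k=5):
        """Return the top-k library entries most similar to the query,
        i.e. those with the smallest dissimilarity to its descriptor.
        Each library entry is a dict with a precomputed 'descriptor'."""
        ranked = sorted(library,
                        key=lambda item: dissimilarity(query_descriptor,
                                                       item["descriptor"]))
        return ranked[:top_k]
    ```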

  15. Quality versus intelligibility: studying human preferences for American Sign Language video

    NASA Astrophysics Data System (ADS)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2011-03-01

    Real-time videoconferencing using cellular devices provides natural communication to the Deaf community. For this application, compressed American Sign Language (ASL) video must be evaluated in terms of the intelligibility of the conversation and not in terms of the overall aesthetic quality of the video. This work presents a paired-comparison experiment to determine the subjective preferences of ASL users in terms of the trade-off between intelligibility and quality when varying the proportion of the bitrate allocated explicitly to the regions of the video containing the signer. A rate-distortion optimization technique, which jointly optimizes a quality criterion and an intelligibility criterion according to a user-specified parameter, generates test video pairs for the subjective experiment. Experimental results suggest that at sufficiently high bitrates, all users prefer videos in which the non-signer regions are encoded at some nominal rate. As the total encoding bitrate decreases, users generally prefer video in which a greater proportion of the rate is allocated to the signer. The specific operating points preferred in the quality-intelligibility trade-off vary with the demographics of the users.
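    The joint criterion can be sketched as a weighted sum of the two distortion terms; the linear form, the [0, 1] weight, and the names below are assumptions for illustration, not the paper's exact formulation:

    ```python
    def combined_distortion(d_quality, d_intelligibility, user_lambda):
        """Joint cost trading overall quality against signer-region
        intelligibility via a user-specified weight in [0, 1]."""
        return user_lambda * d_quality + (1.0 - user_lambda) * d_intelligibility

    def best_allocation(allocations, user_lambda):
        """Pick the signer/background rate split minimizing the joint
        cost. Each allocation is a (d_quality, d_intelligibility) pair."""
        return min(allocations,
                   key=lambda a: combined_distortion(a[0], a[1], user_lambda))
    ```

    Sweeping `user_lambda` from 0 to 1 traces out the quality-intelligibility trade-off curve from which the preferred operating points are chosen.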

  16. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes an analog National Television System Committee (NTSC) video signal generated by a video camera, together with data acquired by a microcontroller, and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.

  17. Automated Scoring of Teachers' Open-Ended Responses to Video Prompts: Bringing the Classroom-Video-Analysis Assessment to Scale

    ERIC Educational Resources Information Center

    Kersting, Nicole B.; Sherin, Bruce L.; Stigler, James W.

    2014-01-01

    In this study, we explored the potential for machine scoring of short written responses to the Classroom-Video-Analysis (CVA) assessment, which is designed to measure teachers' usable mathematics teaching knowledge. We created naïve Bayes classifiers for CVA scales assessing three different topic areas and compared computer-generated scores to…

  18. Using Video Images to Improve the Accuracy of Surrogate Decision-Making: A Randomized Controlled Trial

    PubMed Central

    Volandes, Angelo E.; Mitchell, Susan L.; Gillick, Muriel R.; Chang, Yuchiao; Paasche-Orlow, Michael K.

    2009-01-01

    Introduction When patients are unable to make important end-of-life decisions, doctors ask surrogate decision makers to provide insight into patients’ preferences. Unfortunately, multiple studies have shown that surrogates’ knowledge of patient preferences is poor. We hypothesized that a video decision tool would improve concordance between patients and their surrogates for end-of-life preferences. Objective To compare the concordance of preferences among elderly patients and their surrogates listening to only a verbal description of advanced dementia or viewing a video decision support tool of the disease after hearing the verbal description. Methods This was a randomized controlled trial of a convenience sample of community-dwelling elderly subjects (≥65 years) and their surrogates, and was conducted at 2 geriatric clinics affiliated with 2 academic medical centers in Boston. The study was conducted between September 1, 2007, and May 30, 2008. Random assignment of patient and surrogate dyads was to either a verbal narrative or a video decision support tool after the verbal narrative. End points were goals of care chosen by the patient and predicted goals of care by the surrogate. Goals of care included life-prolonging care (CPR, mechanical ventilation), limited care (hospitalization, antibiotics, but not CPR), and comfort care (only treatment to relieve symptoms). The primary outcome measure was the concordance rate of preferences between patients and their surrogates. Results A total of 14 pairs of patients and their surrogates were randomized to verbal narrative (n = 6) or video after verbal narrative (n = 8). Among the 6 patients receiving only the verbal narrative, 3 (50%) preferred comfort care, 1 (17%) chose limited care, and 2 (33%) desired life-prolonging care. Among the surrogates for these patients, only 2 correctly chose what their loved one would want if in a state of advanced dementia, yielding a concordance rate of 33%. Among the 8 patients

  19. Reliability Demonstration Approach for Advanced Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Ha, Chuong; Zampino, Edward; Penswick, Barry; Spronz, Michael

    2010-01-01

    Developed for future space missions as a high-efficiency power system, the Advanced Stirling Radioisotope Generator (ASRG) has a design life requirement of 14 yr in space following a potential storage of 3 yr after fueling. In general, the demonstration of long-life dynamic systems remains difficult, in part due to the perception that the wearout of moving parts cannot be minimized and the associated failures are unpredictable. This paper shows that a combination of systematic analytical methods, extensive experience gained from technology development, and well-planned tests can be used to ensure a high level of reliability for the ASRG. With this approach, all potential risks from each life phase of the system are evaluated and their mitigation adequately addressed. This paper also provides a summary of important test results obtained to date for the ASRG and the planned effort for system-level extended operation.

  20. Content validation of an interprofessional learning video peer assessment tool.

    PubMed

    Nisbet, Gillian; Jorm, Christine; Roberts, Chris; Gordon, Christopher J; Chen, Timothy F

    2017-12-16

    Large-scale models of interprofessional learning (IPL) in which outcomes are assessed are rare within health professional curricula. To date, there is sparse research describing robust assessment strategies to support such activities. We describe the development of an IPL assessment task based on peer rating of a student-generated video evidencing collaborative interprofessional practice. We provide content validation evidence for an assessment rubric in the context of large-scale IPL. Two established approaches to scale development in an educational setting were combined. A literature review was undertaken to develop a conceptual model of the relevant domains and issues pertaining to assessment of student-generated videos within IPL. Starting with a prototype rubric developed from the literature, a series of staff and student workshops was undertaken to integrate expert opinion and user perspectives. Participants assessed five-minute videos produced in a prior pilot IPL activity. Outcomes from each workshop informed the next version of the rubric until agreement was reached on anchoring statements and criteria. At this point the rubric was declared fit to be used in the upcoming mandatory large-scale IPL activity. The assessment rubric consisted of four domains: patient issues; interprofessional negotiation; interprofessional management plan in action; and effective use of the video medium to engage the audience. The first three domains reflected topic content relevant to the underlying construct of interprofessional collaborative practice. The fourth domain was consistent with the broader video assessment literature calling for greater emphasis on creativity in education. We have provided evidence for the content validity of a video-based peer assessment task portraying interprofessional collaborative practice in the context of large-scale IPL activities for healthcare professional students. Further research is needed to establish the reliability of such a scale.

  1. UrtheCast Second-Generation Earth Observation Sensors

    NASA Astrophysics Data System (ADS)

    Beckett, K.

    2015-04-01

    UrtheCast's second-generation, state-of-the-art Earth Observation (EO) remote sensing platform will be hosted on the NASA segment of the International Space Station (ISS). This platform comprises a high-resolution dual-mode (pushbroom and video) optical camera and a dual-band (X and L) Synthetic Aperture Radar (SAR) instrument. These new sensors will complement the first-generation medium-resolution pushbroom and high-definition video cameras that were mounted on the Russian segment of the ISS in early 2014. The new cameras are expected to be launched to the ISS in late 2017 via the Space Exploration Technologies Corporation Dragon spacecraft. The Canadarm will then be used to install the remote sensing platform onto a CBM (Common Berthing Mechanism) hatch on Node 3, allowing the sensor electronics to be accessible from the inside of the station, thus limiting their exposure to the space environment and allowing for future capability upgrades. The UrtheCast second-generation system will be able to take full advantage of the strengths that each of the individual sensors offers, such that the data exploitation capabilities of the combined sensors are significantly greater than those of either sensor alone. This represents a truly novel platform that will lead to significant advances in many other Earth Observation applications, such as environmental monitoring, energy and natural resources management, and humanitarian response, with data availability anticipated to begin after commissioning is completed in early 2018.

  2. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max-based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
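    A toy stand-in for an entropy-based patch-heterogeneity measure is sketched below. The exact HIP definition is in the paper; this version simply averages the Shannon entropy of pixel values over non-overlapping patches, which captures the same intuition that busy frames score higher than flat ones:

    ```python
    import math
    from collections import Counter

    def patch_entropy_index(frame, patch=4):
        """Average Shannon entropy (bits) of pixel values within
        non-overlapping patch x patch blocks of a 2-D intensity frame.
        Flat frames score 0; textured frames score higher."""
        h, w = len(frame), len(frame[0])
        entropies = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                pixels = [frame[y + dy][x + dx]
                          for dy in range(patch) for dx in range(patch)]
                n = len(pixels)
                entropies.append(-sum((c / n) * math.log2(c / n)
                                      for c in Counter(pixels).values()))
        return sum(entropies) / len(entropies)
    ```

    Evaluating such an index for every frame yields a per-sequence curve whose peaks are natural key-frame candidates.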

  3. Violence in Teen-Rated Video Games

    PubMed Central

    Haninger, Kevin; Ryan, M. Seamus; Thompson, Kimberly M

    2004-01-01

    Context: Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for “Teen”) by the Entertainment Software Rating Board (ESRB) has not been quantified. Objective: To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. Design: We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Main Outcome Measures: Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. Results: Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2

  4. Violence in teen-rated video games.

    PubMed

    Haninger, Kevin; Ryan, M Seamus; Thompson, Kimberly M

    2004-03-11

    Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for "Teen") by the Entertainment Software Rating Board (ESRB) has not been quantified. To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. 
Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2%) received only a content descriptor for blood, and 14 game titles

  5. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in automated image-based modelling (IBM) techniques, especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architecture and monuments. In practice, video imaging is much easier to apply than still-image shooting in IBM techniques, because the latter requires thorough planning and proficiency. However, three main problems arise when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images decreases the processing time and yields a reliable textured 3D model compared with models produced from still imaging. Two experiments, modelling a building and a monument, are conducted using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to determine the final predicted accuracy and the model's level of detail. Depending on object complexity and video imaging resolution, the tests show an achievable average accuracy of between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
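    One common way to cull blurred video frames before reconstruction is a sharpness score such as the variance of a Laplacian response; the sketch below is a generic illustration of that idea, not the paper's specific blur-detection method:

    ```python
    def laplacian_variance(gray):
        """Sharpness score for a 2-D list of grayscale intensities:
        variance of a 4-neighbour Laplacian over interior pixels.
        Higher values indicate sharper frames; blurred frames score low."""
        vals = []
        for y in range(1, len(gray) - 1):
            for x in range(1, len(gray[0]) - 1):
                vals.append(4 * gray[y][x]
                            - gray[y - 1][x] - gray[y + 1][x]
                            - gray[y][x - 1] - gray[y][x + 1])
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)
    ```

    Frames whose score falls below a threshold (chosen per sequence) would be dropped, keeping the minimal set that still covers the object.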

  6. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made in the imagery portion of such systems. The result is that these systems produce lower-resolution images in small quantities. Currently, a high-resolution wireless imaging system is being developed to bring megapixel streaming video to remote locations and operate in concert with UGS. This paper provides an overview of how using Wi-Fi radios, new image-based Digital Signal Processors (DSPs) running advanced target detection algorithms, and high-resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.

  7. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.

  8. Generation of optimum vertical profiles for an advanced flight management system

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Waters, M. H.

    1981-01-01

    Algorithms for generating minimum-fuel or minimum-cost vertical profiles are derived and examined. The option to fix the time of flight is included in the concepts developed. These algorithms form the basis for the design of an advanced on-board flight management system. The variations in the optimum vertical profiles (resulting from these concepts) due to variations in wind, takeoff mass, and range to destination are presented. Fuel savings due to optimum climb, free cruise altitude, and absorbing delays en route are examined.

  9. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    NASA Astrophysics Data System (ADS)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    Scientific audiovisual media, such as videos of research, interactive displays, and computer animations, have become an important part of scientific communication and education. Dynamic phenomena can be described better by audiovisual media than by words and pictures alone. For this reason, scientific videos help us to understand and discuss environmental phenomena more efficiently. Moreover, the creation of scientific videos is easier than ever, thanks to mobile devices and open-source editing software. Video clips, webinars, and even the interactive part of a PICO are formats of scientific audiovisual media used in the geosciences. This type of media translates location-referenced science communication, such as environmental interpretation, into computer-based science communication. A new form of science communication is video abstracting. A video abstract is a three- to five-minute video statement that provides background information about a research paper. It also gives authors the opportunity to present their research activities to a wider audience. Since this kind of media has become an important part of scientific communication, there is a need for reliable infrastructures capable of managing the digital assets researchers generate. Using the use case of video abstracts as a reference, this paper gives an overview of the activities of the German National Library of Science and Technology (TIB) regarding publishing and linking audiovisual media in a scientifically sound way. The TIB, in cooperation with the Hasso Plattner Institute (HPI), developed a web-based portal (av.tib.eu) that optimises access to scientific videos in the fields of science and technology. Videos from the realms of science and technology can easily be uploaded onto the TIB|AV-Portal. Within a short period of time the videos are assigned a digital object identifier (DOI). This enables them to be referenced, cited, and linked (e.g.
to the

  10. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for real-time video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high-quality playback.

  11. Reliability of trauma management videos on YouTube and their compliance with ATLS® (9th edition) guideline.

    PubMed

    Şaşmaz, M I; Akça, A H

    2017-06-01

    In this study, the reliability of trauma management scenario videos (in English) on YouTube and their compliance with Advanced Trauma Life Support (ATLS®) guidelines were investigated. The search was conducted on February 15, 2016 using the terms "assessment of trauma" and "management of trauma". All videos that were uploaded between January 2011 and June 2016 were viewed by two experienced emergency physicians. The data regarding the date of upload, the type of uploader, the duration of the video, and view counts were recorded. The videos were categorized according to video source and score. The search results yielded 880 videos. Eight hundred and thirteen videos were excluded by the researchers. The distribution of videos by year was found to be balanced. The scores of videos uploaded by an institution were determined to be higher compared with the other groups (p = 0.003). The findings of this study show that the majority of trauma management videos on YouTube are not reliable or compliant with ATLS guidelines and therefore cannot be recommended for educational purposes. These data may only be used in public education after the necessary arrangements are made.

  12. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    PubMed

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

    A steep learning curve is initially encountered in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope. Surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful for reducing the initial learning curve of neuroendoscopy.

  13. Flame experiments at the advanced light source: new insights into soot formation processes.

    PubMed

    Hansen, Nils; Skeen, Scott A; Michelsen, Hope A; Wilson, Kevin R; Kohse-Höinghaus, Katharina

    2014-05-26

    The following experimental protocols and the accompanying video are concerned with the flame experiments that are performed at the Chemical Dynamics Beamline of the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory(1-4). This video demonstrates how the complex chemical structures of laboratory-based model flames are analyzed using flame-sampling mass spectrometry with tunable synchrotron-generated vacuum-ultraviolet (VUV) radiation. This experimental approach combines isomer-resolving capabilities with high sensitivity and a large dynamic range(5,6). The first part of the video describes experiments involving burner-stabilized, reduced-pressure (20-80 mbar) laminar premixed flames. A small hydrocarbon fuel was used for the selected flame to demonstrate the general experimental approach. It is shown how species' profiles are acquired as a function of distance from the burner surface and how the tunability of the VUV photon energy is used advantageously to identify many combustion intermediates based on their ionization energies. For example, this technique has been used to study gas-phase aspects of the soot-formation processes, and the video shows how the resonance-stabilized radicals, such as C3H3, C3H5, and i-C4H5, are identified as important intermediates(7). The work has been focused on soot formation processes, and, from the chemical point of view, this process is very intriguing because chemical structures containing millions of carbon atoms are assembled from a fuel molecule possessing only a few carbon atoms in just milliseconds. The second part of the video highlights a new experiment, in which an opposed-flow diffusion flame and synchrotron-based aerosol mass spectrometry are used to study the chemical composition of the combustion-generated soot particles(4). 
The experimental results indicate that the widely accepted H-abstraction-C2H2-addition (HACA) mechanism is not the sole molecular growth process responsible for the formation

  14. Flame Experiments at the Advanced Light Source: New Insights into Soot Formation Processes

    PubMed Central

    Hansen, Nils; Skeen, Scott A.; Michelsen, Hope A.; Wilson, Kevin R.; Kohse-Höinghaus, Katharina

    2014-01-01

    The following experimental protocols and the accompanying video are concerned with the flame experiments that are performed at the Chemical Dynamics Beamline of the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory1-4. This video demonstrates how the complex chemical structures of laboratory-based model flames are analyzed using flame-sampling mass spectrometry with tunable synchrotron-generated vacuum-ultraviolet (VUV) radiation. This experimental approach combines isomer-resolving capabilities with high sensitivity and a large dynamic range5,6. The first part of the video describes experiments involving burner-stabilized, reduced-pressure (20-80 mbar) laminar premixed flames. A small hydrocarbon fuel was used for the selected flame to demonstrate the general experimental approach. It is shown how species’ profiles are acquired as a function of distance from the burner surface and how the tunability of the VUV photon energy is used advantageously to identify many combustion intermediates based on their ionization energies. For example, this technique has been used to study gas-phase aspects of the soot-formation processes, and the video shows how the resonance-stabilized radicals, such as C3H3, C3H5, and i-C4H5, are identified as important intermediates7. The work has been focused on soot formation processes, and, from the chemical point of view, this process is very intriguing because chemical structures containing millions of carbon atoms are assembled from a fuel molecule possessing only a few carbon atoms in just milliseconds. The second part of the video highlights a new experiment, in which an opposed-flow diffusion flame and synchrotron-based aerosol mass spectrometry are used to study the chemical composition of the combustion-generated soot particles4. 
The experimental results indicate that the widely accepted H-abstraction-C2H2-addition (HACA) mechanism is not the sole molecular growth process responsible for the formation of the

  15. Video conference quality assessment based on cooperative sensing of video and audio

    NASA Astrophysics Data System (ADS)

    Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu

    2015-12-01

    This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by a bilateral filter, which resembles the low-pass characteristic of the human visual system. The audio frames are evaluated with the PEAQ algorithm. The two results are integrated to evaluate the video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS scores, so we conclude that the proposed method is effective in assessing video conference quality.
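    The frame decomposition described in this abstract can be sketched as follows. This is a minimal, illustrative bilateral filter in NumPy, not the authors' implementation; the function name, parameter values, and test frame are assumptions:

    ```python
    import numpy as np

    def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.2):
        """Naive bilateral filter: a spatial Gaussian kernel weighted by
        range (intensity) similarity, so edges are preserved."""
        h, w = img.shape
        pad = np.pad(img, radius, mode="edge")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
        out = np.empty_like(img)
        for i in range(h):
            for j in range(w):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
                weights = spatial * rng_w
                out[i, j] = (weights * patch).sum() / weights.sum()
        return out

    rng = np.random.default_rng(0)
    frame = rng.random((16, 16))        # stand-in for one grayscale video frame
    filtered = bilateral_filter(frame)  # low-pass "content" layer
    noise = frame - filtered            # residual layer used for quality scoring
    ```

    By construction the two layers sum back to the original frame, so quality metrics can be applied to each layer separately.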

  16. Advanced Datapresence From A New Generation Of Research Vessels

    NASA Astrophysics Data System (ADS)

    Romsos, C. G.; Nahorniak, J.; Watkins-Brandt, K.; Bailey, D.; Reimers, C.

    2016-02-01

    The design of the next generation Regional Class Research Vessels (RCRV) for the U.S. academic research fleet includes the development of advanced datapresence systems and capabilities. Datapresence is defined here as the real-time transfer of scientific and operational data between vessel and shore, to facilitate shore-based participation in oceanographic expeditions. Datapresent technologies on the RCRVs build upon the demonstrated success of telepresence activities on satellite-connected ships. Specifically, the RCRV datapresence design integrates a broad suite of ocean and meteorological sensors on the vessel into a networked environment with satellite communication access. In addition to enabling operational decisions from shore, these capabilities will bring ocean research to the classroom and local communities, advancing ocean and atmospheric literacy, via dynamic data products that support hands-on exercises and demonstrations of oceanographic and atmospheric processes. The operational requirements of data integration, management, visualization, and user-interaction are being developed and tested now and will be refined over the next 5-6 years during the RCRV construction and transition to operations phases. This presentation will illustrate the RCRV datapresence design and how datapresent technologies will transform these National Science Foundation-owned coastal ships into continuous sampling and data streaming platforms that leverage onshore resources for making efficient scientific and operational decisions.

  17. Guerrilla Video: A New Protocol for Producing Classroom Video

    ERIC Educational Resources Information Center

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  18. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval against the generated ground-truth. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
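    A margin-based weighting scheme of the kind this abstract describes can be sketched as follows. This is a simplified diagonal-metric version with hypothetical names and toy data, not the authors' cost function:

    ```python
    import numpy as np

    def learn_weights(pairs, labels, dim, margin=1.0, lr=0.01, epochs=200):
        """Learn non-negative per-word weights so that weighted squared
        distances pull similar pairs under the margin and push dissimilar
        pairs beyond it (hinge-style updates)."""
        w = np.ones(dim)
        for _ in range(epochs):
            for (xa, xb), similar in zip(pairs, labels):
                diff2 = (xa - xb) ** 2
                d = w @ diff2                     # weighted squared distance
                if similar and d > margin:        # similar pair too far apart
                    w -= lr * diff2
                elif not similar and d < margin:  # dissimilar pair too close
                    w += lr * diff2
            w = np.clip(w, 0.0, None)             # keep the metric valid
        return w

    # toy signatures: dims 0-1 carry the true difference, dims 2-3 are noise
    xa = np.zeros(4)
    similar_pair = (xa, np.array([0.0, 0.0, 1.0, 1.0]))
    dissimilar_pair = (xa, np.array([1.0, 1.0, 0.0, 0.0]))
    w = learn_weights([similar_pair, dissimilar_pair], [True, False], dim=4)
    ```

    After training, the noise dimensions are down-weighted relative to the informative ones, which is the intended effect of the learned metric.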

  19. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    PubMed

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation.
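    A greedy caricature of dictionary selection with a similar-inhibition term might look like this; the scoring rule, names, and toy data are illustrative assumptions, not the paper's formulation:

    ```python
    import numpy as np

    def select_keyframes(feats, k=2, inhibition=None):
        """Greedily pick k frames: each pick maximizes coverage of all frames
        (sum of similarities) minus a penalty for resembling frames already
        selected, which enforces diversity among the key frames."""
        sim = feats @ feats.T              # cosine similarity (rows L2-normalized)
        n = feats.shape[0]
        if inhibition is None:
            inhibition = float(n)          # strong enough to force diversity
        selected = []
        for _ in range(k):
            best, best_score = -1, -np.inf
            for i in range(n):
                if i in selected:
                    continue
                coverage = sim[i].sum()
                penalty = max((sim[i, j] for j in selected), default=0.0)
                score = coverage - inhibition * penalty
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
        return selected

    # toy video: four near-identical frames of one scene, two of another
    feats = np.array([[1.0, 0.0]] * 4 + [[0.0, 1.0]] * 2)
    keys = select_keyframes(feats, k=2)
    ```

    The first pick covers the dominant scene; the inhibition term forces the second pick into the other scene instead of a near-duplicate.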

  20. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  1. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  2. Content Based Lecture Video Retrieval Using Speech and Video Text Information

    ERIC Educational Resources Information Center

    Yang, Haojin; Meinel, Christoph

    2014-01-01

    In the last decade e-lecturing has become more and more popular. The amount of lecture video data on the "World Wide Web" (WWW) is growing rapidly. Therefore, a more efficient method for video retrieval in WWW or within large lecture video archives is urgently needed. This paper presents an approach for automated video indexing and video…

  3. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation, an exemplar-based clustering algorithm, achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
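    Exemplar-based frame sampling of the kind named here can be sketched with scikit-learn's AffinityPropagation; the feature construction below is a toy stand-in for real frame descriptors, and the function name is an assumption:

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def sample_exemplar_frames(features):
        """Cluster frame feature vectors with affinity propagation and return
        the exemplar frame indices plus a cluster label per frame."""
        ap = AffinityPropagation(random_state=0).fit(features)
        return ap.cluster_centers_indices_, ap.labels_

    # toy frames drawn from three visually distinct shots
    rng = np.random.default_rng(0)
    centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    features = np.vstack([c + 0.5 * rng.standard_normal((10, 2)) for c in centers])
    exemplars, labels = sample_exemplar_frames(features)
    ```

    Each exemplar is an actual frame of the video, so the sampled set doubles as a compact summary, which is the trade-off the paper measures against compression.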

  4. Home Telehealth Video Conferencing: Perceptions and Performance

    PubMed Central

    Morris, Greg; Pech, Joanne; Rechter, Stuart; Carati, Colin; Kidd, Michael R

    2015-01-01

    Background The Flinders Telehealth in the Home trial (FTH trial), conducted in South Australia, was an action research initiative to test and evaluate the inclusion of telehealth services and broadband access technologies for palliative care patients living in the community and home-based rehabilitation services for the elderly at home. Telehealth services at home were supported by video conferencing between a therapist, nurse or doctor, and a patient using an iPad tablet. Objective The aims of this study are to identify which technical factors influence the quality of video conferencing in the home setting and to assess the impact of these factors on the clinical perceptions and acceptance of video conferencing for health care delivery into the home. Finally, we aim to identify any relationships between technical factors and clinical acceptance of this technology. Methods An action research process developed several quantitative and qualitative procedures during the FTH trial to investigate technology performance and users' perceptions of the technology, including measurements of signal power, data transmission throughput, objective assessment of user perceptions of video conference quality, and questionnaires administered to clinical users. Results The effectiveness of telehealth was judged by clinicians as equivalent to or better than a home visit on 192 (71.6%, 192/268) occasions, and clinicians rated the experience of conducting a telehealth session compared with a home visit as equivalent or better in 90.3% (489/540) of the sessions. The quality of video conferencing when using a third generation mobile data service (3G), in comparison to broadband fiber-based services, was concerning, as 23.5% (220/936) of the calls failed during the telehealth sessions. The experimental field tests indicated that video conferencing audio and video quality was worse when using mobile data services compared with fiber to the home services. 
As well, statistically

  5. Student Estimates of Public Speaking Competency: The Meaning Extraction Helper and Video Self-Evaluation

    ERIC Educational Resources Information Center

    LeFebvre, Luke; LeFebvre, Leah; Blackburn, Kate; Boyd, Ryan

    2015-01-01

    Video continues to be used in many basic communication courses as a way for students to self-evaluate speechmaking. In this study, students (N = 71) presented speeches, viewed the video recordings, and produced self-generated feedback. Comparing student's self-estimated grades from the self-evaluation against earned grades resulted in composite…

  6. Using video recording to identify management errors in pediatric trauma resuscitation.

    PubMed

    Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon

    2006-03-01

    To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.

  7. Development of Kinetic Mechanisms for Next-Generation Fuels and CFD Simulation of Advanced Combustion Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitz, William J.; McNenly, Matt J.; Whitesides, Russell

    Predictive chemical kinetic models are needed to represent next-generation fuel components and their mixtures with conventional gasoline and diesel fuels. These kinetic models will allow the prediction of the effect of alternative fuel blends in CFD simulations of advanced spark-ignition and compression-ignition engines. Enabled by kinetic models, CFD simulations can be used to optimize fuel formulations for advanced combustion engines so that maximum engine efficiency, fossil fuel displacement goals, and low pollutant emission goals can be achieved.

  8. Detection and localization of copy-paste forgeries in digital videos.

    PubMed

    Singh, Raahat Devender; Aggarwal, Naveen

    2017-12-01

    -world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracies and exhibited different scopes of applicabilities. For videos compressed using QFs in the range 70-100, the existing noise residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively.
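    The sensor-pattern-noise idea underlying the best-performing scheme can be illustrated as follows; the box-blur denoiser, synthetic pattern, and magnitudes are simplifying assumptions (real schemes use wavelet denoising and statistical tests):

    ```python
    import numpy as np

    def noise_residue(frame, k=3):
        """Residue = frame minus a denoised copy (here: a k x k box blur);
        the residue retains the camera's pattern noise."""
        pad = k // 2
        p = np.pad(frame, pad, mode="edge")
        h, w = frame.shape
        blur = sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k ** 2
        return frame - blur

    def correlation(a, b):
        """Normalized cross-correlation between two equally sized arrays."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

    # synthetic demo: frames from the source camera share its pattern noise
    rng = np.random.default_rng(0)
    pattern = rng.standard_normal((64, 64))            # camera reference pattern
    authentic = rng.standard_normal((64, 64)) + 0.5 * pattern
    foreign = rng.standard_normal((64, 64))            # pasted-in content
    corr_auth = correlation(noise_residue(authentic), pattern)
    corr_foreign = correlation(noise_residue(foreign), pattern)
    ```

    Regions whose residue fails to correlate with the camera's reference pattern are the candidates for copy-paste localization.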

  9. Using standardized patients versus video cases for representing clinical problems in problem-based learning.

    PubMed

    Yoon, Bo Young; Choi, Ikseon; Choi, Seokjin; Kim, Tae-Hee; Roh, Hyerin; Rhee, Byoung Doo; Lee, Jong-Tae

    2016-06-01

    The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypotheses generation. SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.

  10. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of the Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both of the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find the normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring the coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
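    The Fourier step, locating the two normal-mode frequencies in the recorded displacement, can be reproduced in a few lines of NumPy instead of a spreadsheet; the mode frequencies and sampling parameters below are illustrative, not those of the actual pendulum:

    ```python
    import numpy as np

    fs, T = 50.0, 40.0                  # sampling rate (Hz) and duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    f1, f2 = 0.9, 1.1                   # assumed normal-mode frequencies (Hz)
    # vertical displacement: superposition of the two normal modes (beats)
    z = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

    spectrum = np.abs(np.fft.rfft(z))
    freqs = np.fft.rfftfreq(len(z), 1.0 / fs)
    # the two largest spectral peaks recover the normal-mode frequencies
    modes = np.sort(freqs[np.argsort(spectrum)[-2:]])
    ```

    The same transform applied to the camera-tracked angle coordinate would expose the torsional content of each mode.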

  11. A new method for digital video documentation in surgical procedures and minimally invasive surgery.

    PubMed

    Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S

    2003-02-01

    Documentation of surgical procedures is limited by the accuracy of description, which depends on the vocabulary and descriptive prowess of the surgeon. Even analog video recording could not solve the documentation problem satisfactorily, due to the abundance of recorded material. Capturing the video digitally solves most of these problems under the circumstances described in this article. We developed an inexpensive and useful digital video capturing system that consists of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). The median time for editing a video clip was 12 min for an advanced user (including cutting, titling the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capture of intraoperative video sequences in high quality. All possibilities of documentation can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility or surgical elbow room are necessary. The cost is much lower than that of commercially available systems, and setting changes can be performed easily without trained specialists.

  12. Using Video-Based Instruction to Integrate Ethics into the Curriculum

    ERIC Educational Resources Information Center

    Sedaghat, Ali M.; Mintz, Steven M.; Wright, George M.

    2011-01-01

    This paper describes a video case discussion project based on the IMA's Statement of Ethical Professional Practice that was administered in a cost accounting class to assess the extent to which students were able to identify and discuss ethical issues raised by the facts of a case scenario. The case was developed by the IMA to advance the…

  13. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems of elderly people, or examining the indoor activities of people at home, in the office or in the hospital.
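    The recognition step, scoring a feature sequence against per-activity Hidden Markov Models, can be sketched with a scaled forward algorithm; the two toy models and symbol sequences below are assumptions for illustration, not the paper's trained models:

    ```python
    import numpy as np

    def forward_loglik(obs, pi, A, B):
        """Log-likelihood of a discrete observation sequence under an HMM
        (forward algorithm with per-step scaling for numerical stability)."""
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum())
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            s = alpha.sum()
            ll += np.log(s)
            alpha /= s
        return ll

    def classify(obs, models):
        """Pick the activity whose HMM assigns the sequence the highest likelihood."""
        return max(models, key=lambda name: forward_loglik(obs, *models[name]))

    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.3, 0.7]])
    models = {
        "walk": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),  # mostly emits symbol 0
        "sit":  (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]])),  # mostly emits symbol 1
    }
    activity = classify([0, 0, 1, 0, 0], models)
    ```

    In the described system the observation symbols would come from quantized skeleton-joint features rather than the toy emissions used here.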

  14. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advances in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. From these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and for generating life logs. The life-logging system is divided into two processes. First, the training system performs data collection using a depth camera, feature extraction, and training of a Hidden Markov Model for each activity. Second, after training, the recognition engine recognizes the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates relative to conventional approaches. Experiments conducted on smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems of elderly people or examining the indoor activities of people at home, in the office, or in a hospital. PMID:24991942

  15. Digital Video for Fostering Self-Reflection in an ePortfolio Environment

    ERIC Educational Resources Information Center

    Cheng, Gary; Chau, Juliana

    2009-01-01

    The ability to self-reflect is widely recognized as a desirable learner attribute that can induce deep learning. Advances in computer-mediated communication technologies have led to intense interest in higher education in exploring the potential of digital tools, particularly digital video, for fostering self-reflection. While there are reports…

  16. Subxiphoid uniportal video-assisted thoracoscopic surgery for synchronous bilateral lung resection.

    PubMed

    Yang, Xueying; Wang, Linlin

    2018-01-01

    With advancements in medical imaging and the current emphasis on regular physical examinations, multiple pulmonary lesions, including bilateral pulmonary lesions, are increasingly being detected. Video-assisted thoracic surgery is an important method for treating such lesions. Most video-assisted thoracic surgeries for bilateral pulmonary lesions have been performed as two separate operations. Herein, we report a novel technique of synchronous subxiphoid uniportal video-assisted thoracic surgery for bilateral pulmonary lesions. Synchronous bilateral lung resection procedures were performed through a single incision (~4 cm, subxiphoid). This technique was used successfully in 11 patients with bilateral pulmonary lesions. There were no intraoperative deaths, and no mortality was recorded at 30 days. Our results show that the subxiphoid uniportal thoracoscopic procedure is safe and feasible for synchronous bilateral lung resection, with less surgical trauma, less postoperative pain, and better cosmetic results in qualifying patients. Further analysis involving a larger number of subjects is ongoing.

  17. Applications study of advanced power generation systems utilizing coal-derived fuels, volume 2

    NASA Technical Reports Server (NTRS)

    Robson, F. L.

    1981-01-01

    Technology readiness and development trends are discussed for three advanced power generation systems: combined-cycle gas turbines, fuel cells, and magnetohydrodynamics. Power plants using these technologies are described, and their performance is assessed either utilizing a medium-Btu coal-derived fuel supplied by pipeline from a large central coal gasification facility or integrated with a gasification facility supplying medium-Btu fuel gas.

  18. A comparison of cigarette- and hookah-related videos on YouTube.

    PubMed

    Carroll, Mary V; Shensa, Ariel; Primack, Brian A

    2013-09-01

    YouTube is now the second most visited site on the internet. The authors aimed to compare characteristics of and messages conveyed by cigarette- and hookah-related videos on YouTube. Systematic search procedures yielded 66 cigarette-related and 61 hookah-related videos. After three trained qualitative researchers used an iterative approach to develop and refine definitions for the coding of variables, two of them independently coded each video for content including positive and negative associations with smoking and major content type. Median view counts were 606,884 for cigarette-related videos and 102,307 for hookah-related videos (p<0.001). However, the number of comments per 1000 views was significantly lower for cigarette-related videos than for hookah-related videos (1.6 vs 2.5, p=0.003). There was no significant difference in the number of 'like' designations per 100 reactions (91 vs 87, p=0.39). Cigarette-related videos were less likely than hookah-related videos to portray tobacco use in a positive light (24% vs 92%, p<0.001). In addition, cigarette-related videos were more likely to be of high production quality (42% vs 5%, p<0.001), to mention short-term consequences (50% vs 18%, p<0.001) and long-term consequences (44% vs 2%, p<0.001) of tobacco use, to contain explicit antismoking messages (39% vs 0%, p<0.001) and to provide specific information on how to quit tobacco use (21% vs 0%, p<0.001). Although internet user-generated videos related to cigarette smoking often acknowledge harmful consequences and provide explicit antismoking messages, hookah-related videos do not. It may be valuable for public health programmes to correct common misconceptions regarding hookah use.

  19. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken at time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
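
    The change-mask step described in this abstract, an adaptive threshold applied to a linear combination of the intensity and gradient-magnitude difference images, can be sketched as follows. The weights and the mean-plus-k-standard-deviations threshold rule are illustrative assumptions; the paper does not spell out its exact adaptive-threshold formula.

    ```python
    import statistics

    def grad_mag(img):
        """Forward-difference gradient magnitude (L1) of a 2-D intensity grid."""
        h, w = len(img), len(img[0])
        g = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                gy = img[i + 1][j] - img[i][j] if i + 1 < h else 0
                gx = img[i][j + 1] - img[i][j] if j + 1 < w else 0
                g[i][j] = abs(gy) + abs(gx)
        return g

    def change_mask(a, b, w_int=1.0, w_grad=0.5, k=1.0):
        """Binary change mask: threshold a linear combination of the intensity
        difference and the gradient-magnitude difference adaptively at
        mean + k * std of the combined difference image (weights and rule
        are assumed for illustration)."""
        ga, gb = grad_mag(a), grad_mag(b)
        h, w = len(a), len(a[0])
        d = [
            [w_int * abs(a[i][j] - b[i][j]) + w_grad * abs(ga[i][j] - gb[i][j])
             for j in range(w)]
            for i in range(h)
        ]
        flat = [v for row in d for v in row]
        thresh = statistics.mean(flat) + k * statistics.pstdev(flat)
        return [[v > thresh for v in row] for row in d]
    ```

    On registered mosaic pairs the same masking would run after alignment; the subsequent relevant/non-relevant filtering (shadows, parallax, compression artifacts) is a separate step not sketched here.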

  20. Surgical Videos with Synchronised Vertical 2-Split Screens Recording the Surgeons' Hand Movement.

    PubMed

    Kaneko, Hiroki; Ra, Eimei; Kawano, Kenichi; Yasukawa, Tsutomu; Takayama, Kei; Iwase, Takeshi; Terasaki, Hiroko

    2015-01-01

    We aimed to improve the state-of-the-art teaching system by creating surgical videos with synchronised vertical 2-split screens. An ultra-compact, wide-angle point-of-view camcorder (HX-A1, Panasonic) was mounted on the surgical microscope, focusing mostly on the surgeons' hand movements. In combination with the regular surgical videos obtained from the CCD camera in the surgical microscope, synchronised vertical 2-split-screen surgical videos were generated with video-editing software. Using these videos, residents of the ophthalmology department could watch and learn how assistant surgeons controlled the eyeball while the main surgeons performed scleral buckling surgery. In vitrectomy, the synchronised vertical 2-split-screen videos showed the surgeons' hands holding the instruments and moving roughly and boldly, in contrast to the very delicate movements of the vitrectomy instruments inside the eye. Synchronised vertical 2-split-screen surgical videos are beneficial for the education of young surgical trainees learning surgical skills, including the surgeons' hand movements. © 2015 S. Karger AG, Basel.

  1. Business Model Evaluation for an Advanced Multimedia Service Portfolio

    NASA Astrophysics Data System (ADS)

    Pisciella, Paolo; Zoric, Josip; Gaivoronski, Alexei A.

    In this paper we quantitatively analyze a business model for the collaborative provision of an advanced mobile data service portfolio composed of three multimedia services: Video on Demand, Internet Protocol Television, and User Generated Content. We describe the provision system, considering the relation between technical and business aspects for each agent providing a basic multimedia service. This techno-business analysis is then projected into a mathematical model of the problem of defining incentives among the different agents involved in collaborative service provision. Through this model we aim to shape the behaviour of each contributing agent by modifying the level of profitability that the service portfolio yields to each of them.

  2. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image across multiple video fields of a video sequence provides accurate determination of the image's change in magnification, rotation, and translation between video fields, so that the video fields may be accurately corrected for these changes. In a described embodiment, a key area of a key video field is selected that contains the image to be stabilized. The key area is subdivided into nested pixel blocks, and the translation of each pixel block from the key video field to a new video field is determined as a precursor to determining the change in magnification, rotation, and translation of the image from the key video field to the new video field.
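
    The per-block translation estimate at the heart of this method can be sketched as an exhaustive block-matching search. The sum-of-absolute-differences (SAD) cost and the search-window parameters below are illustrative assumptions; the patent's full procedure additionally combines the translations of the nested blocks to recover magnification and rotation, which is not reproduced here.

    ```python
    def block_sad(key, new, y0, x0, h, w, dy, dx):
        """Sum of absolute differences between a key-frame block and the
        same block displaced by (dy, dx) in the new field."""
        return sum(
            abs(key[y0 + i][x0 + j] - new[y0 + dy + i][x0 + dx + j])
            for i in range(h)
            for j in range(w)
        )

    def block_translation(key, new, y0, x0, h, w, radius):
        """Integer translation of one pixel block, found by exhaustive SAD
        search over a (2*radius+1)^2 window. Repeating this for each nested
        block yields the per-block translations from which the field-to-field
        magnification, rotation and translation can be determined."""
        return min(
            ((dy, dx) for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)),
            key=lambda d: block_sad(key, new, y0, x0, h, w, d[0], d[1]),
        )
    ```

    The caller is responsible for choosing block positions far enough from the frame border that the search window stays in bounds.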

  3. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image across multiple video fields of a video sequence provides accurate determination of the image's change in magnification, rotation, and translation between video fields, so that the video fields may be accurately corrected for these changes. In a described embodiment, a key area of a key video field is selected that contains the image to be stabilized. The key area is subdivided into nested pixel blocks, and the translation of each pixel block from the key video field to a new video field is determined as a precursor to determining the change in magnification, rotation, and translation of the image from the key video field to the new video field.

  4. Video Captions Benefit Everyone.

    PubMed

    Gernsbacher, Morton Ann

    2015-10-01

    Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions.

  5. Video Captions Benefit Everyone

    PubMed Central

    Gernsbacher, Morton Ann

    2016-01-01

    Video captions, also known as same-language subtitles, benefit everyone who watches videos (children, adolescents, college students, and adults). More than 100 empirical studies document that captioning a video improves comprehension of, attention to, and memory for the video. Captions are particularly beneficial for persons watching videos in their non-native language, for children and adults learning to read, and for persons who are D/deaf or hard of hearing. However, despite U.S. laws, which require captioning in most workplace and educational contexts, many video audiences and video creators are naïve about the legal mandate to caption, much less the empirical benefit of captions. PMID:28066803

  6. Developing model-making and model-breaking skills using direct measurement video-based activities

    NASA Astrophysics Data System (ADS)

    Vonk, Matthew; Bohacek, Peter; Militello, Cheryl; Iverson, Ellen

    2017-12-01

    This study focuses on student development of two important laboratory skills in the context of introductory college-level physics. The first skill, which we call model making, is the ability to analyze a phenomenon in a way that produces a quantitative multimodal model. The second skill, which we call model breaking, is the ability to critically evaluate if the behavior of a system is consistent with a given model. This study involved 116 introductory physics students in four different sections, each taught by a different instructor. All of the students within a given class section participated in the same instruction (including labs) with the exception of five activities performed throughout the semester. For those five activities, each class section was split into two groups; one group was scaffolded to focus on model-making skills and the other was scaffolded to focus on model-breaking skills. Both conditions involved direct measurement videos. In some cases, students could vary important experimental parameters within the video like mass, frequency, and tension. Data collected at the end of the semester indicate that students in the model-making treatment group significantly outperformed the other group on the model-making skill despite the fact that both groups shared a common physical lab experience. Likewise, the model-breaking treatment group significantly outperformed the other group on the model-breaking skill. This is important because it shows that direct measurement video-based instruction can help students acquire science-process skills, which are critical for scientists, and which are a key part of current science education approaches such as the Next Generation Science Standards and the Advanced Placement Physics 1 course.

  7. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    …analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
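
    The calibration described in this record, integrating the signal in each frame and pairing it with the known, modulated source brightness to obtain the response curve, can be sketched as follows. The helper names and the linear-interpolation inversion are illustrative assumptions, not the custom software's actual implementation.

    ```python
    def integrated_signal(frame, background=0.0):
        """Background-subtracted integrated signal of one video frame."""
        return sum(v - background for row in frame for v in row)

    def build_response_curve(frames, brightness, background=0.0):
        """Pair each frame's integrated signal with the known (modulated)
        source brightness at that instant, sorted by signal: the system
        response curve (measured output vs. input brightness)."""
        return sorted(
            (integrated_signal(f, background), b)
            for f, b in zip(frames, brightness)
        )

    def calibrate(signal, curve):
        """Convert a measured integrated signal to brightness by linear
        interpolation along the response curve (assumed inversion scheme)."""
        for (s0, b0), (s1, b1) in zip(curve, curve[1:]):
            if s0 <= signal <= s1:
                t = (signal - s0) / (s1 - s0)
                return b0 + t * (b1 - b0)
        raise ValueError("signal outside calibrated range")
    ```

    Once built, the curve would be applied frame by frame to the photometry of the video images under analysis.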

  8. A scheme for racquet sports video analysis with the combination of audio-visual information

    NASA Astrophysics Data System (ADS)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols, including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels, including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes through further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips, such as game (match) points, can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  9. An investigation into online videos as a source of safety hazard reports.

    PubMed

    Nasri, Leila; Baghersad, Milad; Gruss, Richard; Marucchi, Nico Sung Won; Abrahams, Alan S; Ehsani, Johnathon P

    2018-06-01

    Despite the advantages of video-based product reviews relative to text-based reviews in detecting possible safety hazard issues, video-based product reviews have received no attention in prior literature. This study focuses on online video-based product reviews as possible sources to detect safety hazards. We use two common text mining methods - sentiment and smoke words - to detect safety issues mentioned in videos on the world's most popular video sharing platform, YouTube. 15,402 product review videos from YouTube were identified as containing either negative sentiment or smoke words, and were carefully manually viewed to verify whether hazards were indeed mentioned. 496 true safety issues (3.2%) were found. Out of 9,453 videos that contained smoke words, 322 (3.4%) mentioned safety issues, vs. only 174 (2.9%) of the 5,949 videos with negative sentiment words. Only 1% of randomly-selected videos mentioned safety hazards. Comparing the number of videos with true safety issues that contain sentiment words vs. smoke words in their title or description, we show that smoke words are a more accurate predictor of safety hazards in video-based product reviews than sentiment words. This research also discovers words that are indicative of true hazards versus false positives in online video-based product reviews. Practical applications: The smoke words lists and word sub-groups generated in this paper can be used by manufacturers and consumer product safety organizations to more efficiently identify product safety issues from online videos. This project also provides realistic baselines for resource estimates for future projects that aim to discover safety issues from online videos or reviews. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
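
    The screening step this study uses, flagging records whose text contains smoke words or negative-sentiment words and then manually verifying the hits, can be sketched as simple set-intersection matching. The word lists below are invented stand-ins; the study derives its smoke-word lists empirically from hazard reports.

    ```python
    # Illustrative word lists only: the study's actual smoke-word and
    # sentiment lexicons are much larger and empirically derived.
    SMOKE_WORDS = {"recall", "hazard", "burn", "shock", "fire", "injury"}
    NEGATIVE_SENTIMENT = {"bad", "terrible", "awful", "disappointed"}

    def tokens(text):
        """Crude tokenizer: split on whitespace, strip punctuation, lowercase."""
        return {w.strip(".,!?").lower() for w in text.split()}

    def screen(videos):
        """Partition video title/description records by whether they contain
        smoke words or negative-sentiment words; flagged records would then
        go to manual review, as in the study's workflow."""
        smoke_hits = [v for v in videos if tokens(v) & SMOKE_WORDS]
        sentiment_hits = [v for v in videos if tokens(v) & NEGATIVE_SENTIMENT]
        return smoke_hits, sentiment_hits
    ```

    Comparing the manually verified safety issues within each partition is what lets the study conclude that smoke words are the more precise screen.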

  10. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    …challenge of generating summaries of bronchoscopy video recordings. PMID:22185344

  11. On Finding the C in CBT: The Challenges of Applying Gambling-Related Cognitive Approaches to Video-Gaming.

    PubMed

    Delfabbro, Paul; King, Daniel

    2013-11-14

    Many similarities have been drawn between the activities of gambling and video-gaming. Both are repetitive activities with intermittent reinforcement, decision-making opportunities, and elements of risk-taking. As a result, it might be tempting to believe that cognitive strategies that are used to treat problem gambling might also be applied to problematic video gaming. In this paper, we argue that many cognitive approaches to gambling that typically involve a focus on erroneous beliefs about probabilities and randomness are not readily applicable to video gaming. Instead, we encourage a focus on other clusters of cognitions that relate to: (a) the salience and over-valuing of gaming rewards, experiences, and identities, (b) maladaptive and inflexible rules about behaviour, (c) the use of video-gaming to maintain self-esteem, and (d) video-gaming for social status and recognition. This theoretical discussion is advanced as a starting point for the development of more refined cognitive treatment approaches for problematic video gaming.

  12. On finding the C in CBT: the challenges of applying gambling-related cognitive approaches to video-gaming.

    PubMed

    Delfabbro, Paul; King, Daniel

    2015-03-01

    Many similarities have been drawn between the activities of gambling and video-gaming. Both are repetitive activities with intermittent reinforcement, decision-making opportunities, and elements of risk-taking. As a result, it might be tempting to believe that cognitive strategies that are used to treat problem gambling might also be applied to problematic video gaming. In this paper, we argue that many cognitive approaches to gambling that typically involve a focus on erroneous beliefs about probabilities and randomness are not readily applicable to video gaming. Instead, we encourage a focus on other clusters of cognitions that relate to: (a) the salience and over-valuing of gaming rewards, experiences, and identities, (b) maladaptive and inflexible rules about behaviour, (c) the use of video-gaming to maintain self-esteem, and (d) video-gaming for social status and recognition. This theoretical discussion is advanced as a starting point for the development of more refined cognitive treatment approaches for problematic video gaming.

  13. High-speed holographic correlation system for video identification on the internet

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material from the Internet. By combining a high-speed correlation engine with web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing the digital authorization server in FReCs with optical correlation.

  14. Effects of video compression on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Cha, Jae; Preece, Bradley

    2008-04-01

    The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in greater than 100 fold decrease in required bandwidth. Compression, however, is generally not error-free and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the effect of various video compression algorithms and their impact on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.

  15. Materials Advances for Next-Generation Ingestible Electronic Medical Devices.

    PubMed

    Bettinger, Christopher J

    2015-10-01

    Electronic medical implants have collectively transformed the diagnosis and treatment of many diseases, but have many inherent limitations. Electronic implants require invasive surgeries, operate in challenging microenvironments, and are susceptible to bacterial infection and persistent inflammation. Novel materials and nonconventional device fabrication strategies may revolutionize the way electronic devices are integrated with the body. Ingestible electronic devices offer many advantages compared with implantable counterparts that may improve the diagnosis and treatment of pathologies ranging from gastrointestinal infections to diabetes. This review summarizes current technologies and highlights recent materials advances. Specific focus is dedicated to next-generation materials for packaging, circuit design, and on-board power supplies that are benign, nontoxic, and even biodegradable. Future challenges and opportunities are also highlighted. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. The virtual brain: 30 years of video-game play and cognitive abilities.

    PubMed

    Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J

    2013-09-13

    Forty years have passed since video-games were first made widely available to the public, and playing games has subsequently become a favorite pastime for many. Players continuously engage with dynamic visual displays, with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite this promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements.

  17. The virtual brain: 30 years of video-game play and cognitive abilities

    PubMed Central

    Latham, Andrew J.; Patston, Lucy L. M.; Tippett, Lynette J.

    2013-01-01

    Forty years have passed since video-games were first made widely available to the public, and playing games has subsequently become a favorite pastime for many. Players continuously engage with dynamic visual displays, with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite this promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements. PMID:24062712

  18. Minimally invasive video-assisted thyroidectomy: Ascending the learning curve

    PubMed Central

    Capponi, Michela Giulii; Bellotti, Carlo; Lotti, Marco; Ansaloni, Luca

    2015-01-01

    BACKGROUND: Minimally invasive video-assisted thyroidectomy (MIVAT) is a technically demanding procedure and requires a surgical team skilled in both endocrine and endoscopic surgery. The aim of this report is to point out some aspects of the learning curve of video-assisted thyroid surgery, through an analysis of our preliminary series of procedures. PATIENTS AND METHODS: Over a period of 8 months, we selected 36 patients for minimally invasive video-assisted surgery of the thyroid. Patients were considered eligible if they presented with a nodule not exceeding 35 mm and a total thyroid volume <20 ml; biochemical and ultrasound signs of thyroiditis and a pre-operative diagnosis of cancer were exclusion criteria. We analysed the surgical results, conversion rate, operating time, post-operative complications, hospital stay and cosmetic outcomes of the series. RESULTS: We performed 36 total thyroidectomies, and in one case we performed a concomitant parathyroidectomy. The procedure was successfully carried out in 33 of 36 cases (conversion rate 8.3%). The mean operating time was 109 min (range: 80-241 min) and reached a plateau after 29 MIVAT. Post-operative complications included three transient recurrent nerve palsies and two transient hypocalcemias; no definitive hypoparathyroidism was registered. The cosmetic result was considered excellent by most patients. CONCLUSIONS: Advances in skills and technology allow surgeons to easily reproduce the standard open total thyroidectomy with video-assistance. Although the learning curve represents a time-consuming step, training remains a crucial point in gaining reasonable confidence with the video-assisted surgical technique. PMID:25883451

  19. Using learning analytics to evaluate a video-based lecture series.

    PubMed

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA): analysis of the quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR, the percentage of viewers watching at a time point compared to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content requiring more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
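
    The AR analysis described here, fitting the roughly linear decline in audience retention and flagging time points that sit well above the fitted line, can be sketched with an ordinary least-squares fit. The residual threshold is an illustrative assumption; the study's exact regression procedure is not reproduced.

    ```python
    def linear_fit(xs, ys):
        """Ordinary least-squares fit: ys ~ intercept + slope * xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return my - slope * mx, slope

    def retention_spikes(times, retention, min_residual=2.0):
        """Fit the roughly linear decline in audience retention, then flag
        time points sitting well above the fitted line: segments that
        transiently regain viewers, which the study associates with core
        concepts worth closer evaluation. The residual threshold is an
        assumed parameter."""
        intercept, slope = linear_fit(times, retention)
        spikes = [
            t for t, r in zip(times, retention)
            if r - (intercept + slope * t) > min_residual
        ]
        return slope, spikes
    ```

    Applied to each video's AR trace exported from the sharing platform, the slope summarizes the uniform decline and the flagged times locate candidate core-concept segments.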

  20. Analytical investigation of thermal barrier coatings on advanced power generation gas turbines

    NASA Technical Reports Server (NTRS)

    Amos, D. J.

    1977-01-01

    An analytical investigation of present and advanced gas turbine power generation cycles incorporating thermal barrier turbine component coatings was performed. Approximately 50 parametric points considering simple, recuperated, and combined cycles (including gasification) with gas turbine inlet temperatures from current levels through 1644 K (2500 F) were evaluated. The results indicated that thermal barriers would be an attractive means to improve performance and reduce cost of electricity for these cycles. A recommended thermal barrier development program has been defined.

  1. A Comparison of Cigarette- and Hookah-Related Videos on YouTube

    PubMed Central

    Carroll, Mary V.; Shensa, Ariel; Primack, Brian A.

    2013-01-01

    Objective YouTube is now the second most visited site on the Internet. We aimed to compare characteristics of and messages conveyed by cigarette- and hookah-related videos on YouTube. Methods Systematic search procedures yielded 66 cigarette-related and 61 hookah-related videos. After 3 trained qualitative researchers used an iterative approach to develop and refine definitions for the coding of variables, 2 of them independently coded each video for content including positive and negative associations with smoking and major content type. Results Median view counts were 606,884 for cigarettes and 102,307 for hookahs (P<.001). However, the number of comments per 1,000 views was significantly lower for cigarette-related videos than for hookah-related videos (1.6 vs 2.5, P=.003). There was no significant difference in the number of “like” designations per 100 reactions (91 vs. 87, P=.39). Cigarette-related videos were less likely than hookah-related videos to portray tobacco use in a positive light (24% vs. 92%, P<.001). In addition, cigarette-related videos were more likely to be of high production quality (42% vs. 5%, P<.001), to mention short-term consequences (50% vs. 18%, P<.001) and long-term consequences (44% vs. 2%, P<.001) of tobacco use, to contain explicit antismoking messages (39% vs. 0%, P<.001), and to provide specific information on how to quit tobacco use (21% vs. 0%, P<.001). Conclusions Although Internet user–generated videos related to cigarette smoking often acknowledge harmful consequences and provide explicit antismoking messages, hookah-related videos do not. It may be valuable for public health programs to correct common misconceptions regarding hookah use. PMID:22363069

  2. Testing of the Advanced Stirling Radioisotope Generator Engineering Unit at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.

    2013-01-01

    The Advanced Stirling Radioisotope Generator (ASRG) is a high-efficiency generator being developed for potential use on a Discovery 12 space mission. Lockheed Martin designed and fabricated the ASRG Engineering Unit (EU) under contract to the Department of Energy. This unit was delivered to NASA Glenn Research Center in 2008 and has been undergoing extended operation testing to generate long-term performance data for an integrated system. It has also been used for tests to characterize generator operation while varying control parameters and system inputs, both when controlled with an alternating current (AC) bus and with a digital controller. The ASRG EU currently has over 27,000 hours of operation. This paper summarizes all of the tests that have been conducted on the ASRG EU over the past 3 years and provides an overview of the test results and what was learned.

  3. Next-generation sequencing for endocrine cancers: Recent advances and challenges.

    PubMed

    Suresh, Padmanaban S; Venkatesh, Thejaswini; Tsutsumi, Rie; Shetty, Abhishek

    2017-05-01

    Contemporary molecular biology research tools have enriched numerous areas of biomedical research that address challenging diseases, including endocrine cancers (pituitary, thyroid, parathyroid, adrenal, testicular, ovarian, and neuroendocrine cancers). These tools have placed several intriguing clues before the scientific community. Endocrine cancers pose a major challenge in health care and research despite considerable attempts by researchers to understand their etiology. Microarray analyses have provided gene signatures from many cells, tissues, and organs that can differentiate healthy states from diseased ones, and even show patterns that correlate with stages of a disease. Microarray data can also elucidate the responses of endocrine tumors to therapeutic treatments. The rapid progress in next-generation sequencing methods has overcome many of the initial challenges of these technologies, and their advantages over microarray techniques have enabled them to emerge as valuable aids for clinical research applications (prognosis, identification of drug targets, etc.). A comprehensive review describing the recent advances in next-generation sequencing methods and their application in the evaluation of endocrine and endocrine-related cancers is lacking. The main purpose of this review is to illustrate the concepts that collectively constitute our current view of the possibilities offered by next-generation sequencing technological platforms, challenges to relevant applications, and perspectives on the future of clinical genetic testing of patients with endocrine tumors. We focus on recent discoveries in the use of next-generation sequencing methods for clinical diagnosis of endocrine tumors in patients and conclude with a discussion on persisting challenges and future objectives.

  4. Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content

    PubMed Central

    Brame, Cynthia J.

    2016-01-01

    Educational videos have become an important part of higher education, providing an important content-delivery tool in many flipped, blended, and online classes. Effective use of video as an educational tool is enhanced when instructors consider three elements: how to manage cognitive load of the video; how to maximize student engagement with the video; and how to promote active learning from the video. This essay reviews literature relevant to each of these principles and suggests practical ways instructors can use these principles when using video as an educational tool. PMID:27789532

  5. Science on TeacherTube: A Mixed Methods Analysis of Teacher Produced Video

    NASA Astrophysics Data System (ADS)

    Chmiel, Margaret (Marjee)

    Increased bandwidth, inexpensive video cameras and easy-to-use video editing software have made social media sites featuring user generated video (UGV) an increasingly popular vehicle for online communication. As such, UGV have come to play a role in education, both formal and informal, but there has been little research on this topic in scholarly literature. In this mixed-methods study, a content and discourse analysis are used to describe the most successful UGV in the science channel of an education-focused site called TeacherTube. The analysis finds that state achievement tests, and their focus on vocabulary and recall-level knowledge, drive much of the content found on TeacherTube.

  6. BESTIA - the next generation ultra-fast CO2 laser for advanced accelerator research

    DOE PAGES

    Pogorelsky, Igor V.; Babzien, Markus; Ben-Zvi, Ilan; ...

    2015-12-02

    Over the last two decades, BNL's ATF has pioneered the use of high-peak-power CO2 lasers for research in advanced accelerators and radiation sources. In addition, our recent developments in ion acceleration, Compton scattering, and IFELs have further underscored the benefits of expanding the landscape of strong-field laser interactions deeper into the mid-infrared (MIR) range of wavelengths. This extension validates our ongoing efforts in advancing CO2 laser technology, which we report here. Our next-generation, multi-terawatt, femtosecond CO2 laser will open new opportunities for studying ultra-relativistic laser interactions with plasma in the MIR spectral domain, including new regimes in the particle acceleration of ions and electrons.

  7. Automatic multiple zebrafish larvae tracking in unconstrained microscopic video conditions.

    PubMed

    Wang, Xiaoying; Cheng, Eva; Burnett, Ian S; Huang, Yushi; Wlodkowic, Donald

    2017-12-14

    The accurate tracking of zebrafish larvae movement is fundamental to research in many biomedical, pharmaceutical, and behavioral science applications. However, the locomotive characteristics of zebrafish larvae differ significantly from those of adult zebrafish, so existing adult-zebrafish tracking systems cannot reliably track larvae. Further, the much smaller size difference between larvae and the container makes the detection of water impurities inevitable, which further disrupts larval tracking or requires very strict video imaging conditions that typically yield unreliable tracking results under realistic experimental conditions. This paper investigates the adaptation of advanced computer vision segmentation techniques and multiple object tracking algorithms to develop an accurate, efficient and reliable multiple zebrafish larvae tracking system. The proposed system has been tested on a set of single and multiple adult and larvae zebrafish videos in a wide variety of (complex) video conditions, including shadowing, labels, water bubbles and background artifacts. Compared with existing state-of-the-art and commercial multiple organism tracking systems, the proposed system improves the tracking accuracy by up to 31.57% in unconstrained video imaging conditions. To facilitate the evaluation of zebrafish segmentation and tracking research, a dataset with annotated ground truth is also presented. The software is also publicly accessible.
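
    As an illustration of the multiple-object-tracking step (not the authors' actual algorithm, which the abstract does not specify), a greedy nearest-neighbour assignment can link each detected larva centroid to the closest track from the previous frame:

```python
from math import hypot

def link_detections(prev, curr, max_jump=5.0):
    """Greedily match current-frame centroids to previous-frame tracks.

    prev, curr: lists of (x, y) centroids. Returns {prev_index: curr_index};
    detections farther than max_jump stay unmatched (new track / lost track).
    """
    # All candidate pairs, closest first.
    pairs = sorted(
        (hypot(px - cx, py - cy), i, j)
        for i, (px, py) in enumerate(prev)
        for j, (cx, cy) in enumerate(curr)
    )
    matches, used_prev, used_curr = {}, set(), set()
    for dist, i, j in pairs:
        if dist > max_jump:
            break  # remaining pairs are even farther apart
        if i not in used_prev and j not in used_curr:
            matches[i] = j
            used_prev.add(i)
            used_curr.add(j)
    return matches

prev = [(0.0, 0.0), (10.0, 10.0)]
curr = [(9.0, 10.5), (1.0, 0.5)]
print(link_detections(prev, curr))  # each larva linked to its nearest successor
```

    The `max_jump` gate is what lets a tracker tolerate water impurities: a spurious blob far from any existing track simply fails to match rather than corrupting a trajectory.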

  8. Energy expended playing video console games: an opportunity to increase children's physical activity?

    PubMed

    Maddison, Ralph; Mhurchu, Cliona Ni; Jull, Andrew; Jiang, Yannan; Prapavessis, Harry; Rodgers, Anthony

    2007-08-01

    This study sought to quantify the energy expenditure and physical activity associated with playing the "new generation" active and nonactive console-based video games in 21 children ages 10-14 years. Energy expenditure (kcal) derived from oxygen consumption (VO2) was continuously assessed while children played nonactive and active console video games. Physical activity was assessed continuously using the Actigraph accelerometer. Significant (p < .001) increases from baseline were found for energy expenditure (129-400%), heart rate (43-84%), and activity counts (122-1288 versus 0-23) when playing the active console video games. Playing active console video games over short periods of time is similar in intensity to light to moderate traditional physical activities such as walking, skipping, and jogging.
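
    Energy expenditure in studies like this is typically derived from oxygen uptake using an approximate conversion of about 5 kcal per litre of O2 consumed (the exact factor varies with the respiratory quotient). A sketch with made-up numbers, not the study's data:

```python
KCAL_PER_LITRE_O2 = 5.0  # common approximation; roughly 4.7-5.05 depending on RQ

def energy_expenditure_kcal(vo2_ml_per_kg_min, mass_kg, minutes):
    """Convert a VO2 measurement to total kilocalories expended."""
    litres_o2 = vo2_ml_per_kg_min * mass_kg * minutes / 1000.0
    return litres_o2 * KCAL_PER_LITRE_O2

# Illustrative: a 40 kg child playing an active console game at 15 ml/kg/min
# for 10 minutes consumes 6 L of O2, i.e. about 30 kcal.
print(round(energy_expenditure_kcal(15.0, 40.0, 10.0), 1))
```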

  9. Toxicogenomics and Cancer Susceptibility: Advances with Next-Generation Sequencing

    PubMed Central

    Ning, Baitang; Su, Zhenqiang; Mei, Nan; Hong, Huixiao; Deng, Helen; Shi, Leming; Fuscoe, James C.; Tolleson, William H.

    2017-01-01

    The aim of this review is to comprehensively summarize the recent achievements in the field of toxicogenomics and cancer research regarding genetic-environmental interactions in carcinogenesis and detection of genetic aberrations in cancer genomes by next-generation sequencing technology. Cancer is primarily a genetic disease in which genetic factors and environmental stimuli interact to cause genetic and epigenetic aberrations in human cells. Mutations in the germline act either as high-penetrance alleles that strongly increase the risk of cancer development, or as low-penetrance alleles that mildly change an individual’s susceptibility to cancer. Somatic mutations, resulting either from DNA damage induced by exposure to environmental mutagens or from spontaneous errors in DNA replication or repair, are involved in the development or progression of the cancer. Induced or spontaneous changes in the epigenome may also drive carcinogenesis. Advances in next-generation sequencing technology provide opportunities to accurately, economically, and rapidly identify genetic variants, somatic mutations, gene expression profiles, and epigenetic alterations with single-base resolution. Whole genome sequencing, whole exome sequencing, and RNA sequencing of paired cancer and adjacent normal tissue present a comprehensive picture of the cancer genome. These new findings should benefit public health by providing insights into cancer biology and by improving cancer diagnosis and therapy. PMID:24875441

  10. Integer-Linear-Programming Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862
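
    The allocation problem described above - choose an MCS and an airtime share for each SVC layer so that total delivered utility is maximized within the airtime budget - can be illustrated with a tiny exhaustive search standing in for a full ILP solver. All rates, decoder fractions, and the utility model here are hypothetical, not the paper's formulation:

```python
from itertools import product

# Hypothetical MCSs: (rate in bits/slot, fraction of users able to decode it).
MCS = [(1.0, 1.0), (2.0, 0.7), (4.0, 0.4)]
LAYER_BITS = [4.0, 6.0]   # bits per GOP for base and enhancement layer
SLOTS = 8                 # airtime budget in slots

def best_assignment():
    best = (None, -1.0)
    for choice in product(range(len(MCS)), repeat=len(LAYER_BITS)):
        # Slots needed per layer at its chosen MCS.
        need = [LAYER_BITS[l] / MCS[m][0] for l, m in enumerate(choice)]
        if sum(need) > SLOTS:
            continue  # infeasible: exceeds the airtime budget
        # A layer only helps users that can decode it AND all lower layers.
        utility, reach = 0.0, 1.0
        for l, m in enumerate(choice):
            reach = min(reach, MCS[m][1])
            utility += reach * LAYER_BITS[l]
        if utility > best[1]:
            best = (choice, utility)
    return best

choice, utility = best_assignment()
print("MCS per layer:", choice, "utility:", utility)
```

    The search reproduces the SVC/AMC trade-off: a robust MCS for the base layer keeps every user served, while a faster MCS for the enhancement layer spends fewer slots at the cost of reaching fewer users. A real ILP formulation encodes the same feasibility and layer-dependency constraints for a solver instead of enumerating.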

  11. Authentication of digital video evidence

    NASA Astrophysics Data System (ADS)

    Beser, Nicholas D.; Duerr, Thomas E.; Staisiunas, Gregory P.

    2003-11-01

    In response to a requirement from the United States Postal Inspection Service, the Technical Support Working Group tasked The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to develop a technique that will ensure the authenticity, or integrity, of digital video (DV). Verifiable integrity is needed if DV evidence is to withstand a challenge to its admissibility in court on the grounds that it can be easily edited. Specifically, the verification technique must detect additions, deletions, or modifications to DV and satisfy the two-part criteria pertaining to scientific evidence as articulated in Daubert et al. v. Merrell Dow Pharmaceuticals Inc., 43 F3d (9th Circuit, 1995). JHU/APL has developed a prototype digital video authenticator (DVA) that generates digital signatures based on public key cryptography at the frame level of the DV. Signature generation and recording are accomplished at the same time the DV is recorded by the camcorder. Throughput supports the consumer-grade camcorder data rate of 25 Mbps. The DVA software is implemented on a commercial laptop computer, which is connected to a commercial digital camcorder via the IEEE-1394 serial interface. A security token provides agent identification and the interface to the public key infrastructure (PKI) needed for management of the public keys central to DV integrity verification.
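
    The frame-level integrity idea can be sketched as follows. The actual DVA signs each frame with a token-held private key under a PKI; this stand-in uses an HMAC over each frame's SHA-256 digest (standard library only, illustrative key and frame data) to show the same detect-any-edit property:

```python
import hashlib
import hmac

KEY = b"illustrative-agent-key"  # stands in for the token-held private key

def authenticate_frames(frames):
    """Return one authentication tag per DV frame."""
    return [hmac.new(KEY, hashlib.sha256(f).digest(), hashlib.sha256).hexdigest()
            for f in frames]

def verify_frames(frames, tags):
    """Return the indices of frames whose content no longer matches its tag."""
    fresh = authenticate_frames(frames)
    return [i for i, (a, b) in enumerate(zip(fresh, tags))
            if not hmac.compare_digest(a, b)]

frames = [b"frame-0", b"frame-1", b"frame-2"]
tags = authenticate_frames(frames)
frames[1] = b"frame-1-edited"          # simulate a post-recording modification
print(verify_frames(frames, tags))     # the edited frame's index is reported
```

    A public-key signature, as in the real DVA, additionally lets a third party verify the tags without holding the signing secret, which is what matters for courtroom admissibility.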

  12. Using standardized patients versus video cases for representing clinical problems in problem-based learning

    PubMed Central

    2016-01-01

    Purpose: The quality of problem representation is critical for developing students’ problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students’ experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. Methods: A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. Results: The results reveal that using SPs led to the preclinical students having significantly positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning when compared to using video cases. The SPs also provided more challenges than the video cases during problem identification and hypothesis generation. Conclusion: SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum. PMID:26923094

  13. Secure and Lightweight Cloud-Assisted Video Reporting Protocol over 5G-Enabled Vehicular Networks

    PubMed Central

    2017-01-01

    In vehicular networks, the real-time video reporting service is used to send videos recorded in the vehicle to the cloud. However, when facilitating this service, the fourth-generation (4G) long term evolution (LTE) standard was shown to suffer from latency, while the IEEE 802.11p standard does not offer sufficient scalability for such a congested environment. To overcome those drawbacks, the fifth-generation (5G)-enabled vehicular network is considered a promising technology for empowering the real-time video reporting service. In this paper, we note that security and privacy issues should also be carefully addressed to boost the early adoption of 5G-enabled vehicular networks. A few research works exist on secure video reporting services in 5G-enabled vehicular networks, but their usage is limited by public key certificates and expensive pairing operations. Thus, we propose a secure and lightweight protocol for cloud-assisted video reporting service in 5G-enabled vehicular networks. Compared to conventional public key certificates, the proposed protocol achieves entities’ authorization through anonymous credentials. Also, by using lightweight security primitives instead of expensive bilinear pairing operations, the proposed protocol minimizes the computational overhead. The evaluation results show that the proposed protocol requires less computation and communication time for the cryptographic primitives than the well-known Eiza-Ni-Shi protocol. PMID:28946633

  14. Secure and Lightweight Cloud-Assisted Video Reporting Protocol over 5G-Enabled Vehicular Networks.

    PubMed

    Nkenyereye, Lewis; Kwon, Joonho; Choi, Yoon-Ho

    2017-09-23

    In vehicular networks, the real-time video reporting service is used to send videos recorded in the vehicle to the cloud. However, when facilitating this service, the fourth-generation (4G) long term evolution (LTE) standard was shown to suffer from latency, while the IEEE 802.11p standard does not offer sufficient scalability for such a congested environment. To overcome those drawbacks, the fifth-generation (5G)-enabled vehicular network is considered a promising technology for empowering the real-time video reporting service. In this paper, we note that security and privacy issues should also be carefully addressed to boost the early adoption of 5G-enabled vehicular networks. A few research works exist on secure video reporting services in 5G-enabled vehicular networks, but their usage is limited by public key certificates and expensive pairing operations. Thus, we propose a secure and lightweight protocol for cloud-assisted video reporting service in 5G-enabled vehicular networks. Compared to conventional public key certificates, the proposed protocol achieves entities' authorization through anonymous credentials. Also, by using lightweight security primitives instead of expensive bilinear pairing operations, the proposed protocol minimizes the computational overhead. The evaluation results show that the proposed protocol requires less computation and communication time for the cryptographic primitives than the well-known Eiza-Ni-Shi protocol.

  15. Advancing Patient-Centered Care in Tuberculosis Management: A Mixed-Methods Appraisal of Video Directly Observed Therapy

    PubMed Central

    Holzman, Samuel B; Zenilman, Avi; Shah, Maunank

    2018-01-01

    Background: Directly observed therapy (DOT) remains an integral component of treatment support and adherence monitoring in tuberculosis care. In-person DOT is resource intensive and often burdensome for patients. Video DOT (vDOT) has been proposed as an alternative to increase treatment flexibility and better meet patient-specific needs. Methods: We conducted a pragmatic, prospective pilot implementation of vDOT at 3 TB clinics in Maryland. A mixed-methods approach was implemented to assess (1) effectiveness, (2) acceptability, and (3) cost. Medication adherence on vDOT was compared with that of in-person DOT. Interviews and surveys were conducted with patients and providers before and after implementation, with framework analysis utilized to extract salient themes. Last, a cost analysis assessed the economic impacts of vDOT implementation across heterogeneous clinic structures. Results: Medication adherence on vDOT was comparable to that of in-person DOT (94% vs 98%, P = .17), with a higher percentage of total treatment doses (inclusive of weekend/holiday self-administration) ultimately observed during the vDOT period (72% vs 66%, P = .03). Video DOT was well received by staff and patients alike, who cited increased treatment flexibility, convenience, and patient privacy. Our cost analysis estimated a savings with vDOT of $1391 per patient for a standard 6-month treatment course. Conclusions: Video DOT is an acceptable and important option for measurement of TB treatment adherence and may allow a higher proportion of prescribed treatment doses to be observed, compared with in-person DOT. Video DOT may be cost-saving and should be considered as a component of individualized, patient-centered case management plans. PMID:29732378

  16. Advancing Patient-Centered Care in Tuberculosis Management: A Mixed-Methods Appraisal of Video Directly Observed Therapy.

    PubMed

    Holzman, Samuel B; Zenilman, Avi; Shah, Maunank

    2018-04-01

    Directly observed therapy (DOT) remains an integral component of treatment support and adherence monitoring in tuberculosis care. In-person DOT is resource intensive and often burdensome for patients. Video DOT (vDOT) has been proposed as an alternative to increase treatment flexibility and better meet patient-specific needs. We conducted a pragmatic, prospective pilot implementation of vDOT at 3 TB clinics in Maryland. A mixed-methods approach was implemented to assess (1) effectiveness, (2) acceptability, and (3) cost. Medication adherence on vDOT was compared with that of in-person DOT. Interviews and surveys were conducted with patients and providers before and after implementation, with framework analysis utilized to extract salient themes. Last, a cost analysis assessed the economic impacts of vDOT implementation across heterogeneous clinic structures. Medication adherence on vDOT was comparable to that of in-person DOT (94% vs 98%, P = .17), with a higher percentage of total treatment doses (inclusive of weekend/holiday self-administration) ultimately observed during the vDOT period (72% vs 66%, P = .03). Video DOT was well received by staff and patients alike, who cited increased treatment flexibility, convenience, and patient privacy. Our cost analysis estimated a savings with vDOT of $1391 per patient for a standard 6-month treatment course. Video DOT is an acceptable and important option for measurement of TB treatment adherence and may allow a higher proportion of prescribed treatment doses to be observed, compared with in-person DOT. Video DOT may be cost-saving and should be considered as a component of individualized, patient-centered case management plans.

  17. Advanced Structural Analyses by Third Generation Synchrotron Radiation Powder Diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakata, M.; Aoyagi, S.; Ogura, T.

    2007-01-19

    Since the advent of 3rd-generation Synchrotron Radiation (SR) sources such as SPring-8, the capabilities of SR powder diffraction have increased greatly, not only in accurate structure refinement but also in ab initio structure determination. In this study, advanced structural analyses by 3rd-generation SR powder diffraction based on the Large Debye-Scherrer camera installed at BL02B2, SPring-8, are described. Because of the high angular resolution and high counting statistics of powder data collected at BL02B2, SPring-8, ab initio structure determination can cope with molecular crystals of 65 atoms, including H atoms. For the structure refinements, it is found that a kind of Maximum Entropy Method in which several atoms are omitted from the phase calculation becomes very important for refining the structural details of a fairly large molecule in a crystal. It should be emphasized that until a structure obtained by a Genetic Algorithm (GA) or some other ab initio structure determination method using real-space structural knowledge is refined very precisely, it is not possible to tell whether that structure is correct. In order to determine and/or refine the crystal structures of rather complicated molecules, we cannot overemphasize the importance of the 3rd-generation SR sources.

  18. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE PAGES

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    2015-09-11

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
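
    The first two stages of the track-extraction pipeline described above can be sketched on toy data; this is pure Python, with a per-pixel maximum standing in for the video peak store process and a fixed threshold standing in for the background mask (the real algorithms and parameters are not given in the abstract):

```python
def peak_store(frames):
    """Per-pixel maximum over all frames: a warm moving target leaves a trail."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def background_mask(image, threshold):
    """Keep only pixels brighter than the (assumed static) background."""
    return [[v if v > threshold else 0 for v in row] for row in image]

# Toy 3x4 thermal frames: a warm target (value 9) moving left to right
# across the top row over a cool background (value 1).
frames = [
    [[1, 9, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
    [[1, 1, 9, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
    [[1, 1, 1, 9], [1, 1, 1, 1], [1, 1, 1, 1]],
]
track = background_mask(peak_store(frames), threshold=5)
print(track)  # the flight track appears as a row of retained peak values
```

    In the full pipeline, the perceptual-grouping stage would then cluster the surviving pixels into individual flight tracks before quantifying each track for identification and behavior inference.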

  19. Two-dimensional thermal video analysis of offshore bird and bat flight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.

    Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.

  20. Converting laserdisc video to digital video: a demonstration project using brain animations.

    PubMed

    Jao, C S; Hier, D B; Brint, S U

    1995-01-01

    Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.