Sample records for enabling video systems

  1. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  2. Digital Literacy and Online Video: Undergraduate Students' Use of Online Video for Coursework

    ERIC Educational Resources Information Center

    Tiernan, Peter; Farren, Margaret

    2017-01-01

    This paper investigates how to enable undergraduate students' use of online video for coursework using a customised video retrieval system (VRS), in order to understand digital literacy with online video in practice. This study examines the key areas influencing the use of online video for assignments such as the learning value of video,…

  3. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    NASA Astrophysics Data System (ADS)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.
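
    The cache behavior being exploited here can be illustrated with a toy direct-mapped cache simulator. This is a generic sketch under assumed parameters (a 4 KB cache with 32-byte lines), not the paper's enhancements:

    ```python
    # Hit rate of a direct-mapped cache on two access patterns (illustrative).
    def simulate(addresses, cache_bytes=4096, line_bytes=32):
        n_lines = cache_bytes // line_bytes
        tags = [None] * n_lines
        hits = 0
        for addr in addresses:
            line = addr // line_bytes
            idx = line % n_lines
            if tags[idx] == line:
                hits += 1
            else:
                tags[idx] = line          # miss: fill the line
        return hits / len(addresses)

    # Sequential bitstream reads reuse each fetched line before eviction, so
    # even a tiny cache does well; strided reference-frame accesses thrash it.
    stream = range(0, 1 << 20, 4)              # sequential 4-byte reads
    strided = [i * 4096 for i in range(4096)]  # one read per 4 KB stride
    print(simulate(stream), simulate(strided))  # ~0.875 vs. 0.0
    ```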

  4. Task-oriented situation recognition

    NASA Astrophysics Data System (ADS)

    Bauer, Alexander; Fischer, Yvonne

    2010-04-01

    From the advances in computer vision methods for the detection, tracking and recognition of objects in video streams, new opportunities for video surveillance arise: In the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive actions, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations and a constant change in potential threats have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.

  5. What Is the Impact of Video Conferencing on the Teaching and Learning of a Foreign Language in Primary Education?

    ERIC Educational Resources Information Center

    Gruson, Brigitte; Barnes, Francoise

    2012-01-01

    Under the French national project "1000 video conferencing systems for primary schools", a growing number of schools are being equipped with video conferencing systems. The assumption underlying this project is that, by putting students in a position to communicate with distant native speakers, it will enable them to improve their oral and…

  6. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of content by other systems/tools, the integration of related environmental data from complementary data systems (e.g. temperature, bathymetry), and the expansion of infrastructure to enable broad crowdsourcing of annotations.
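
    The geo-referencing step can be illustrated by interpolating the vehicle's navigation track at an annotation's timestamp; the (time, lat, lon) tuple format below is an assumption for the sketch, not OceanVideoLab's actual schema:

    ```python
    import bisect

    def interpolate_fix(nav, t):
        """nav: list of (t_seconds, lat, lon) sorted by time."""
        times = [row[0] for row in nav]
        i = bisect.bisect_left(times, t)
        if i == 0:
            return nav[0][1:]              # before the first fix: clamp
        if i == len(nav):
            return nav[-1][1:]             # after the last fix: clamp
        (t0, lat0, lon0), (t1, lat1, lon1) = nav[i - 1], nav[i]
        w = (t - t0) / (t1 - t0)
        return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))

    nav = [(0.0, 9.8000, -104.2950), (10.0, 9.8002, -104.2948)]
    print(interpolate_fix(nav, 4.0))       # vehicle position 4 s into the video
    ```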

  7. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit through its Camera Link interface. The processed video signals are then sent to the system output board and displayed on the monitor. The current experiment shows that the system can achieve high-quality video conversion with a minimal board size.
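
    The color space conversion stage can be sketched in software for clarity (the system itself implements it in FPGA logic); these are the standard full-range ITU-R BT.601 equations:

    ```python
    import numpy as np

    def ycbcr_to_rgb(y, cb, cr):
        """Full-range BT.601 YCbCr -> RGB; all inputs in 0..255."""
        r = y + 1.402 * (cr - 128.0)
        g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        b = y + 1.772 * (cb - 128.0)
        return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

    print(ycbcr_to_rgb(np.float64(128), np.float64(128), np.float64(128)))  # mid gray
    ```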

  8. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    NASA Technical Reports Server (NTRS)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  9. Content-based management service for medical videos.

    PubMed

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

    Development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.
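
    A common baseline for the temporal segmentation step is to declare a shot boundary wherever the histogram distance between consecutive frames exceeds a threshold. The OpenCV sketch below is a generic illustration with an assumed threshold, not the authors' algorithm:

    ```python
    import cv2

    def shot_boundaries(path, threshold=0.5):
        cap = cv2.VideoCapture(path)
        prev_hist, boundaries, i = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            hist = cv2.normalize(hist, None).flatten()
            if prev_hist is not None:
                # Bhattacharyya distance: 0 = identical, 1 = disjoint
                d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                if d > threshold:
                    boundaries.append(i)   # frame i starts a new shot
            prev_hist, i = hist, i + 1
        cap.release()
        return boundaries
    ```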

  10. Low-latency situational awareness for UxV platforms

    NASA Astrophysics Data System (ADS)

    Berends, David C.

    2012-06-01

    Providing high quality, low latency video from unmanned vehicles through bandwidth-limited communications channels remains a formidable challenge for modern vision system designers. SRI has developed a number of enabling technologies to address this, including the use of SWaP-optimized Systems-on-a-Chip which provide Multispectral Fusion and Contrast Enhancement as well as H.264 video compression. Further, the use of salience-based image prefiltering prior to image compression greatly reduces output video bandwidth by selectively blurring non-important scene regions. Combined with our customization of the VLC open source video viewer for low latency video decoding, SRI developed a prototype high performance, high quality vision system for UxV application in support of very demanding system latency requirements and user CONOPS.
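
    The salience-based prefiltering idea reduces to keeping salient regions sharp and blurring everything else so the encoder spends fewer bits there. In the sketch below the saliency mask is a placeholder; SRI's actual salience model is not described in this abstract:

    ```python
    import cv2
    import numpy as np

    def prefilter(frame, saliency_mask, ksize=21):
        """saliency_mask: uint8, 255 where the scene matters; 0 elsewhere."""
        blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
        mask3 = cv2.merge([saliency_mask] * 3) > 0
        return np.where(mask3, frame, blurred)   # sharp inside, blurred outside

    frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
    mask = np.zeros((480, 640), np.uint8)
    mask[100:300, 200:500] = 255     # pretend this region was flagged salient
    out = prefilter(frame, mask)
    ```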

  11. Integrated remotely sensed datasets for disaster management

    NASA Astrophysics Data System (ADS)

    McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart

    2008-10-01

    Video imagery can be acquired from aerial, terrestrial and marine based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL systems, which are then recorded using various video formats. This technology has recently been employed as a front-line remote sensing technology for damage assessment post-disaster. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM), is described. New improvements are proposed, including low-cost encoders, easy to use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and so carry out rapid damage assessment during and post-disaster.

  12. Efficient data replication for the delivery of high-quality video content over P2P VoD advertising networks

    NASA Astrophysics Data System (ADS)

    Ho, Chien-Peng; Yu, Jen-Yu; Lee, Suh-Yin

    2011-12-01

    Recent advances in modern television systems have had profound consequences for the scalability, stability, and quality of transmitted digital data signals. This is of particular significance for peer-to-peer (P2P) video-on-demand (VoD) related platforms, faced with an immediate and growing demand for reliable service delivery. In response to demands for high-quality video, the key objectives in the construction of the proposed framework were user satisfaction with perceived video quality and the effective utilization of available resources on P2P VoD networks. This study developed a peer-based promoter to support online advertising in P2P VoD networks based on an estimation of video distortion prior to the replication of data stream chunks. The proposed technology enables the recovery of lost video using replicated stream chunks in real time. Load balance is achieved by adjusting the replication level of each candidate group according to the degree-of-distortion, thereby enabling a significant reduction in server load and increased scalability in the P2P VoD system. This approach also promotes the use of advertising as an efficient tool for commercial promotion. Results indicate that the proposed system efficiently satisfies the given fault tolerances.

  13. Automated videography for residential communications

    NASA Astrophysics Data System (ADS)

    Kurtz, Andrew F.; Neustaedter, Carman; Blose, Andrew C.

    2010-02-01

    The current widespread use of webcams for personal video communication over the Internet suggests that opportunities exist to develop video communications systems optimized for domestic use. We discuss both prior and existing technologies, and the results of user studies that indicate potential needs and expectations for people relative to personal video communications. In particular, users anticipate an easily used, high image quality video system, which enables multitasking communications during the course of real-world activities and provides appropriate privacy controls. To address these needs, we propose a potential approach premised on automated capture of user activity. We then describe a method that adapts cinematography principles, with a dual-camera videography system, to automatically control image capture relative to user activity, using semantic or activity-based cues to determine user position and motion. In particular, we discuss an approach to automatically manage shot framing, shot selection, and shot transitions, with respect to one or more local users engaged in real-time, unscripted events, while transmitting the resulting video to a remote viewer. The goal is to tightly frame subjects (to provide more detail), while minimizing subject loss and repeated abrupt shot framing changes in the images as perceived by a remote viewer. We also discuss some aspects of the system and related technologies that we have experimented with thus far. In summary, the method enables users to participate in interactive video-mediated communications while engaged in other activities.

  14. IRIS Connect: Developing Classroom Dialogue and Formative Feedback through Collective Video Reflection. Evaluation Report and Executive Summary

    ERIC Educational Resources Information Center

    Davies, Peter; Perry, Tom; Kirkman, John

    2017-01-01

    IRIS is designed to improve primary school teachers' use of dialogue and feedback through using video technology for collaborative teacher development with a view to improving academic outcomes for pupils. It is based around a video technology system (IRIS Connect) which enables teachers to record, edit, and comment on teaching and learning. In…

  15. Telemetry and Communication IP Video Player

    NASA Technical Reports Server (NTRS)

    O'Farrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  16. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  17. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner, without making any complicated or expensive electronic connections, and without requiring intensive manpower.
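
    The "analyze the indicator" step can be sketched for a dial-type meter by finding the needle as the darkest ray from the dial center and mapping its angle onto the scale; the calibration constants below are illustrative assumptions, not values from the patent:

    ```python
    import numpy as np

    def read_dial(gray, center, radius,
                  min_angle=-45.0, max_angle=225.0,
                  min_value=0.0, max_value=100.0):
        """gray: 2-D uint8 image; the dial circle must lie fully inside it.
        Angles in degrees, 0 = east, increasing counterclockwise."""
        cx, cy = center
        r = np.arange(radius)
        best_angle, best_dark = 0, float("inf")
        for deg in range(360):
            a = np.radians(deg)
            xs = (cx + r * np.cos(a)).astype(int)   # pixels along the ray
            ys = (cy - r * np.sin(a)).astype(int)
            darkness = gray[ys, xs].mean()
            if darkness < best_dark:                # needle = darkest ray
                best_dark, best_angle = darkness, deg
        frac = (max_angle - best_angle) / (max_angle - min_angle)
        return min_value + frac * (max_value - min_value)
    ```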

  18. Two schemes for rapid generation of digital video holograms using PC cluster

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Song, Joongseok; Kim, Changseob; Park, Jong-Il

    2017-12-01

    Computer-generated holography (CGH), which is a process of generating digital holograms, is computationally expensive. Recently, several methods/systems of parallelizing the process using graphic processing units (GPUs) have been proposed. Indeed, use of multiple GPUs or a personal computer (PC) cluster (each PC with GPUs) enabled great improvements in the process speed. However, extant literature has less often explored systems involving rapid generation of multiple digital holograms and specialized systems for rapid generation of a digital video hologram. This study proposes a system that uses a PC cluster and is able to more efficiently generate a video hologram. The proposed system is designed to simultaneously generate multiple frames and accelerate the generation by parallelizing the CGH computations across a number of frames, as opposed to separately generating each individual frame while parallelizing the CGH computations within each frame. The proposed system also enables the subprocesses for generating each frame to execute in parallel through multithreading. With these two schemes, the proposed system significantly reduced the data communication time for generating a digital hologram when compared with that of the state-of-the-art system.
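
    The frame-parallel scheme can be shown in miniature with a process pool that distributes whole frames across workers rather than splitting each frame; compute_hologram below is a hypothetical stand-in for the CGH kernel, which the paper actually runs on GPUs across a PC cluster:

    ```python
    from multiprocessing import Pool
    import numpy as np

    def compute_hologram(frame_index):
        # placeholder kernel: each frame is an independent computation
        rng = np.random.default_rng(frame_index)
        points = rng.random((1000, 3))        # object points for this frame
        return frame_index, points.sum()      # stand-in for the fringe pattern

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # frames are computed concurrently; order is restored afterwards
            results = dict(pool.map(compute_hologram, range(16)))
        video_hologram = [results[i] for i in range(16)]
    ```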

  19. Onboard Systems Record Unique Videos of Space Missions

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.

  20. 75 FR 43825 - Exemption to Prohibition on Circumvention of Copyright Protection Systems for Access Control...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... works such as video games and slide presentations). B. Computer programs that enable wireless telephone... enabling interoperability of such applications, when they have been lawfully obtained, with computer... new printer driver to a computer constitutes a `modification' of the operating system already...

  1. Video change detection for fixed wing UAVs

    NASA Astrophysics Data System (ADS)

    Bartelsen, Jan; Müller, Thomas; Ring, Jochen; Mück, Klaus; Brüstle, Stefan; Erdnüß, Bastian; Lutz, Bastian; Herbst, Theresa

    2017-10-01

    In this paper we continue the work of Bartelsen et al. [1]. We present the draft of a process chain for an image based change detection which is designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters, like flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection can be reduced to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. Therefore, the aerial image acquisition demands a mission planning with a clear purpose including flight path and sensor configuration. While the latter can be enabled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change for "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off the shelf (COTS) system which comprises a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented [2, 3], as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data. For the image processing and change detection, we use the approach of Müller [4]. Although it was developed for unmanned ground vehicles (UGVs), it enables near real time video change detection for aerial videos. Concluding, we discuss the demands on sensor systems in the matter of change detection.
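
    The core of such an image-based change detection, once the perspective change has been kept small, is to register the "before" frame onto the "after" frame and difference them. The ORB-plus-homography baseline below is a generic sketch, not the cited method of Müller [4]:

    ```python
    import cv2
    import numpy as np

    def change_mask(before, after, diff_thresh=40):
        """before, after: grayscale uint8 frames of roughly the same scene."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(before, None)
        k2, d2 = orb.detectAndCompute(after, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = after.shape[:2]
        warped = cv2.warpPerspective(before, H, (w, h))   # align "before"
        diff = cv2.absdiff(warped, after)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        return mask   # white pixels = candidate changes
    ```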

  2. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    PubMed

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.
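
    The thigmotaxis measure itself is simple once positions are tracked: the fraction of centroid samples that fall within an outer margin of the arena. The margin and circular arena below are illustrative assumptions, not the paper's exact parameters:

    ```python
    import numpy as np

    def thigmotaxis_fraction(xy, arena_center, arena_radius, margin=0.2):
        """xy: (N, 2) tracked centroids; margin: outer fraction of the radius."""
        d = np.linalg.norm(np.asarray(xy, float) - np.asarray(arena_center, float),
                           axis=1)
        near_wall = d > arena_radius * (1.0 - margin)
        return near_wall.mean()

    theta = np.linspace(0, 20, 500)
    track = np.column_stack([100 + 80 * np.cos(theta), 100 + 80 * np.sin(theta)])
    print(thigmotaxis_fraction(track, (100, 100), 90))   # 1.0: always near the wall
    ```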

  3. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds that may enable the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  4. Video System Highlights Hydrogen Fires

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Gleman, Stuart M.; Moerk, John S.

    1992-01-01

    Video system combines images from visible spectrum and from three bands in infrared spectrum to produce color-coded display in which hydrogen fires distinguished from other sources of heat. Includes linear array of 64 discrete lead selenide mid-infrared detectors operating at room temperature. Images overlaid on black and white image of same scene from standard commercial video camera. In final image, hydrogen fires appear red; carbon-based fires, blue; and other hot objects, mainly green and combinations of green and red. Where no thermal source present, image remains in black and white. System enables high degree of discrimination between hydrogen flames and other thermal emitters.

  5. 13 point video tape quality guidelines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaunt, R.

    1997-05-01

    Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.
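
    One classic guideline of this kind, avoiding illegal signal levels, can be expressed in code by compressing full-range RGB into the 16-235 studio swing commonly recommended for NTSC-bound graphics; this is a rough illustration, not one of the report's thirteen guidelines verbatim:

    ```python
    import numpy as np

    def broadcast_safe(rgb):
        """rgb: uint8 array; compress 0..255 into 16..235 head/footroom."""
        scaled = 16.0 + np.asarray(rgb, np.float64) * (235.0 - 16.0) / 255.0
        return scaled.round().astype(np.uint8)

    print(broadcast_safe(np.array([0, 128, 255], np.uint8)))  # -> [ 16 126 235]
    ```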

  6. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  7. Method and apparatus for reading meters from a video image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, T.J.; Ferguson, J.J.

    1997-09-30

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower. 1 fig.

  8. The National Capital Region closed circuit television video interoperability project.

    PubMed

    Contestabile, John; Patrone, David; Babin, Steven

    2016-01-01

    The National Capital Region (NCR) includes many government jurisdictions and agencies using different closed circuit TV (CCTV) cameras and video management software. Because these agencies often must work together to respond to emergencies and events, a means of providing interoperability for CCTV video is critically needed. Video data from different CCTV systems that are not inherently interoperable is represented in the "data layer." An "integration layer" ingests the data layer source video and normalizes the different video formats. It then aggregates and distributes this video to a "presentation layer" where it can be viewed by almost any application used by other agencies and without any proprietary software. A native mobile video viewing application is also developed that uses the presentation layer to provide video to different kinds of smartphones. The NCR includes Washington, DC, and surrounding counties in Maryland and Virginia. The video sharing architecture allows one agency to see another agency's video in their native viewing application without the need to purchase new CCTV software or systems. A native smartphone application was also developed to enable them to share video via mobile devices even when they use different video management systems. A video sharing architecture has been developed for the NCR that creates an interoperable environment for sharing CCTV video in an efficient and cost effective manner. In addition, it provides the desired capability of sharing video via a native mobile application.

  9. A new look at deep-sea video

    USGS Publications Warehouse

    Chezar, H.; Lee, J.

    1985-01-01

    A deep-towed photographic system with completely self-contained recording instrumentation and power can obtain color-video and still-photographic transects along rough terrain without need for a long electrically conducting cable. Both the video- and still-camera systems utilize relatively inexpensive and proven off-the-shelf hardware adapted for deep-water environments. The small instrument frame makes the towed sled an ideal photographic tool for use on ship or small-boat operations. The system includes a temperature probe and altimeter that relay data acoustically from the sled to the surface ship. This relay enables the operator to simultaneously monitor water temperature and the precise height off the bottom. © 1985.

  10. Apply network coding for H.264/SVC multicasting

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Kuo, C.-C. Jay

    2008-08-01

    In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through its throughput improvement. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receiving according to receivers' capacity constraints, but it needs a careful design to improve video transmission performance when applying network coding.
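
    Random linear network coding can be shown in miniature over GF(2) (practical systems typically use GF(2^8)): nodes forward random XOR combinations of the source packets, and a receiver decodes once its coefficient vectors reach full rank; the rank deficiency discussed above is precisely the case where they do not:

    ```python
    import random

    def encode(packets):
        """Return (coefficient_vector, coded_payload) over GF(2)."""
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[0] = 1                 # avoid the useless all-zero combination
        coded = 0
        for c, p in zip(coeffs, packets):
            if c:
                coded ^= p
        return coeffs, coded

    def rank_gf2(rows):
        """Gaussian elimination over GF(2); rows are 0/1 coefficient vectors."""
        rows = [int("".join(map(str, r)), 2) for r in rows]
        rank = 0
        while rows:
            pivot = max(rows)             # row with the highest leading bit
            if pivot == 0:
                break
            rows.remove(pivot)
            rank += 1
            top = pivot.bit_length() - 1
            rows = [r ^ pivot if r >> top & 1 else r for r in rows]
        return rank

    packets = [0b1010, 0b0110, 0b1111]           # three source packets
    rx = [encode(packets) for _ in range(3)]     # three coded packets arrive
    vectors = [coeffs for coeffs, _ in rx]
    print("decodable:", rank_gf2(vectors) == len(packets))
    ```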

  11. Enabling a Comprehensive Teaching Strategy: Video Lectures

    ERIC Educational Resources Information Center

    Brecht, H. David; Ogilby, Suzanne M.

    2008-01-01

    This study empirically tests the feasibility and effectiveness of video lectures as a form of video instruction that enables a comprehensive teaching strategy used throughout a traditional classroom course. It examines student use patterns and the videos' effects on student learning, using qualitative and nonparametric statistical analyses of…

  12. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... surveillance system that enable surveillance personnel to observe the table games remaining open for play and... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A video library log, or comparable alternative procedure approved by the Tribal gaming regulatory...

  13. 25 CFR 542.43 - What are the minimum internal control standards for surveillance for a Tier C gaming operation?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... surveillance system that enable surveillance personnel to observe the table games remaining open for play and... recordings and/or digital records shall be provided to the Commission upon request. (x) Video library log. A video library log, or comparable alternative procedure approved by the Tribal gaming regulatory...

  14. Video fingerprinting for copy identification: from research to industry applications

    NASA Astrophysics Data System (ADS)

    Lu, Jian

    2009-02-01

    Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.
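
    Fingerprint matching at its simplest: compact per-frame binary fingerprints compared by Hamming distance, with a threshold deciding whether content matches. Production systems add temporal alignment and indexing for scale; the fingerprints below are made up for illustration:

    ```python
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def is_copy(fp_query, fp_reference, max_bits=6):
        """fp_*: sequences of 64-bit per-frame fingerprints; a majority of
        frames must match within max_bits for a copy verdict."""
        close = sum(hamming(q, r) <= max_bits
                    for q, r in zip(fp_query, fp_reference))
        return close > len(fp_query) // 2

    query = [0x9F3A6C21D4E5B701, 0x1122334455667788]
    ref   = [0x9F3A6C21D4E5B703, 0xFFFFFFFFFFFFFFFF]   # 1st similar, 2nd not
    print(is_copy(query, ref))   # False: only one of the two frames matches
    ```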

  15. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.

  16. Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.

    PubMed

    Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K

    2004-07-01

    Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make some advances toward the final goal of solving these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework by using principal video shots to enhance the quality of features; 2) semantic video concept interpretation by using a flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework by integrating feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique through a certain domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.

  17. VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.

    ERIC Educational Resources Information Center

    Ekman, Paul; And Others

    The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…

  18. Enabling Collaboration and Video Assessment: Exposing Trends in Science Preservice Teachers' Assessments

    ERIC Educational Resources Information Center

    Borowczak, Mike; Burrows, Andrea C.

    2016-01-01

    This article details a new, free resource for continuous video assessment named YouDemo. The tool enables real time rating of uploaded YouTube videos for use in science, technology, engineering, and mathematics (STEM) education and beyond. The authors discuss trends of preservice science teachers' assessments of self- and peer-created videos using…

  19. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

    In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. In particular, we address the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and negligible computational complexity increase. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
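
    The scrambling idea fits in a few lines: flip the signs of pseudo-randomly selected transform coefficients, keyed by a seed, so that applying the same flips again descrambles. The paper operates on Motion JPEG 2000 wavelet coefficients; the DCT block below is used purely for illustration:

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def scramble_block(block, key, flip_prob=0.5):
        coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")  # 2-D DCT
        rng = np.random.default_rng(key)
        signs = np.where(rng.random(coeffs.shape) < flip_prob, -1.0, 1.0)
        coeffs *= signs   # involution: the same signs undo the scrambling
        return idct(idct(coeffs.T, norm="ortho").T, norm="ortho")

    block = np.arange(64, dtype=float).reshape(8, 8)
    garbled = scramble_block(block, key=42)
    restored = scramble_block(garbled, key=42)   # authorized viewer has the key
    print(np.allclose(block, restored))          # True
    ```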

  20. Methods and new approaches to the calculation of physiological parameters by videodensitometry

    NASA Technical Reports Server (NTRS)

    Kedem, D.; Londstrom, D. P.; Rhea, T. C., Jr.; Nelson, J. H.; Price, R. R.; Smith, C. W.; Graham, T. P., Jr.; Brill, A. B.; Kedem, D.

    1976-01-01

    A complex system featuring a video camera connected to a video disk, a cine (medical motion picture) camera, and a PDP-9 computer with various input/output facilities has been developed. This system enables quantitative analysis of various functions recorded in clinical studies. Several studies are described, such as heart chamber volume calculations, left ventricle ejection fraction, blood flow through the lungs, and the possibility of obtaining information about blood flow and constrictions in small cross-section vessels.
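
    The left-ventricle ejection fraction named above follows directly from the two volumes such a system measures, the end-diastolic (EDV) and end-systolic (ESV) volumes:

    ```python
    def ejection_fraction(edv_ml, esv_ml):
        """EF = (EDV - ESV) / EDV, reported as a percentage."""
        return 100.0 * (edv_ml - esv_ml) / edv_ml

    print(ejection_fraction(120.0, 50.0))   # 58.3%, within the normal range
    ```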

  1. VIDANN: a video annotation system.

    PubMed

    De Clercq, A; Buysse, A; Roeyers, H; Ickes, W; Ponnet, K; Verhofstadt, L

    2001-05-01

    VIDANN is a computer program that allows participants to watch a video on a standard TV and to write their annotations (thought/feeling entries) on paper attached to a writing tablet. The system is designed as a Microsoft ActiveX module. It can be further adapted by the individual researcher through the use of a VBScript. All data, including the participant's handwriting, are stored in an XML database. An accompanying Wizard has been designed that enables researchers to generate VBScripts for standard configurations.

  2. Statistical Relational Learning (SRL) as an Enabling Technology for Data Acquisition and Data Fusion in Video

    DTIC Science & Technology

    2013-05-02

    …particular, it is important to reason about which portions of video require expensive analysis and storage. This project aims to make these…inferences using new and existing tools from Statistical Relational Learning (SRL). SRL is a recently emerging technology that enables the effective…

  3. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote the NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  4. Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 6 - Development and Demonstration of a Self-Organizing Diagnostic System for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail

    2010-01-01

    This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.

  5. Standardized access, display, and retrieval of medical video

    NASA Astrophysics Data System (ADS)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes, etc. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  6. Annotation of UAV surveillance video

    NASA Astrophysics Data System (ADS)

    Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John

    2004-08-01

    Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable to UAV video is described. Object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format is defined for encoding video annotations using the SMPTE Key-Length-Value (KLV) encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation-enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practice for creating annotated products is to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing. It is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
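
    A minimal KLV encoder shows the packet layout the standard defines: a 16-byte universal label key, a BER-encoded length, then the value. The key bytes below are placeholders, not a real SMPTE-registered annotation key:

    ```python
    def klv_encode(key: bytes, value: bytes) -> bytes:
        assert len(key) == 16, "SMPTE universal labels are 16 bytes"
        n = len(value)
        if n < 128:                          # BER short form
            length = bytes([n])
        else:                                # BER long form: 0x80 | byte count
            size = (n.bit_length() + 7) // 8
            length = bytes([0x80 | size]) + n.to_bytes(size, "big")
        return key + length + value

    key = bytes.fromhex("060E2B34" + "00" * 12)    # placeholder universal label
    packet = klv_encode(key, b'{"track_id": 7, "label": "vehicle"}')
    print(packet.hex())
    ```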

  7. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    NASA Astrophysics Data System (ADS)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems 'expose' relevant video-generated metadata events, such as triggered alerts, and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  8. Fighting in a Contested Space Environment: Training Marines for Operations with Degraded or Denied Space-Enabled Capabilities

    DTIC Science & Technology

    2015-06-01

    …in the UHF band; two legacy systems, the Fleet Satellite Communication System (FLTSATCOM) and UHF Follow-On (UFO), and the new constellation being…

  9. The Successful Development of an Automated Rendezvous and Capture (AR&C) System for the National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.

    2003-01-01

    During the 1990's, the Marshall Space Flight Center (MSFC) conducted pioneering research in the development of an automated rendezvous and capture/docking (AR&C) system for U.S. space vehicles. Development and demonstration of a rendezvous sensor was identified early in the AR&C Program as the critical enabling technology that allows automated proximity operations and docking. A first generation rendezvous sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on STS-87 and STS-95, proving the concept of a video-based sensor. A ground demonstration of the entire system and software was successfully tested. Advances in both video and signal processing technologies and the lessons learned from the two successful flight experiments provided a baseline for the development, by MSFC, of a new generation of video-based rendezvous sensor. The Advanced Video Guidance Sensor (AVGS) has greatly increased performance and additional capability for longer-range operation, with a new target designed as a direct replacement for existing ISS hemispherical reflectors.

  10. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. Such communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of each facial sub-component individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  11. Secure and Lightweight Cloud-Assisted Video Reporting Protocol over 5G-Enabled Vehicular Networks

    PubMed Central

    2017-01-01

    In vehicular networks, the real-time video reporting service is used to send videos recorded in the vehicle to the cloud. However, when facilitating the real-time video reporting service in vehicular networks, fourth-generation (4G) long-term evolution (LTE) has been shown to suffer from latency, while the IEEE 802.11p standard does not offer sufficient scalability for such a congested environment. To overcome these drawbacks, the fifth-generation (5G)-enabled vehicular network is considered a promising technology for empowering the real-time video reporting service. In this paper, we note that security and privacy issues should also be carefully addressed to boost the early adoption of 5G-enabled vehicular networks. A few research works exist on secure video reporting services in 5G-enabled vehicular networks; however, their usage is limited because of public key certificates and expensive pairing operations. Thus, we propose a secure and lightweight protocol for a cloud-assisted video reporting service in 5G-enabled vehicular networks. In contrast to conventional public key certificates, the proposed protocol achieves entities' authorization through anonymous credentials. Also, by using lightweight security primitives instead of expensive bilinear pairing operations, the proposed protocol minimizes the computational overhead. The evaluation results show that the proposed protocol requires less computation and communication time for the cryptographic primitives than the well-known Eiza-Ni-Shi protocol. PMID:28946633
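
    The protocol itself is not reproduced here, but the cost argument is easy to illustrate: a symmetric primitive such as an HMAC runs in microseconds, where a bilinear pairing costs milliseconds. The sketch below is a generic HMAC-based report authentication with a placeholder session key; it is not the Nkenyereye-Kwon-Choi construction.

    ```python
    import hashlib
    import hmac
    import os

    # Placeholder session key; in the protocol proper this would be derived
    # during the anonymous-credential handshake, which is not modeled here.
    session_key = os.urandom(32)

    def tag_report(key: bytes, report: bytes) -> bytes:
        # One SHA-256 HMAC pass instead of a bilinear pairing.
        return hmac.new(key, report, hashlib.sha256).digest()

    def verify_report(key: bytes, report: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(tag_report(key, report), tag)

    report = b"reporter:anonymous|ts:1506124800|video-digest:..."
    tag = tag_report(session_key, report)
    assert verify_report(session_key, report, tag)
    ```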

  13. Television animation store: Recording pictures on a parallel transfer magnetic disc

    NASA Astrophysics Data System (ADS)

    Durey, A. J.

    1984-12-01

    The recording and replaying of digital video signals using a computer-type magnetic disc-drive as part of an electronic rostrum camera animation system is described. The system was developed to enable picture sequences to be generated directly as television signals, instead of using cine film. The characteristics of the disc-drive are described together with data processing, error protection and signal synchronization systems, which enable digital television YUV component signals, sampled at 12 MHz, 4 MHz and 4 MHz respectively, to be recorded and replayed in real time.
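
    As a back-of-envelope check on what real-time operation demands here, the stated 12/4/4 MHz YUV sampling implies the following sustained data rate, assuming 8-bit samples (the sample depth is not stated in the abstract):

    ```python
    # Back-of-envelope sustained data rate for 12/4/4 MHz YUV sampling,
    # assuming 8-bit samples (the sample depth is an assumption).
    y_rate, u_rate, v_rate = 12e6, 4e6, 4e6     # samples per second
    bits_per_sample = 8
    total_bits = (y_rate + u_rate + v_rate) * bits_per_sample
    print(f"{total_bits / 1e6:.0f} Mbit/s, i.e. {total_bits / 8e6:.0f} MB/s")
    # -> 160 Mbit/s, i.e. 20 MB/s, which the parallel-transfer disc must sustain
    ```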

  14. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in the video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
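
    A minimal sketch of the final classification stage follows, using a scikit-learn decision tree over hypothetical per-clip feature vectors (replay count, scene-text fraction, motion statistics); the paper's actual features, training data, and tree are not reproduced.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-clip features from the compressed domain:
    # [replay_count, scene_text_fraction, mean_motion_magnitude, bitrate_variance]
    X_train = np.array([[3, 0.12, 7.5, 0.9],    # sports
                        [0, 0.45, 1.2, 0.2],    # news
                        [1, 0.05, 4.0, 0.6],    # sports
                        [0, 0.30, 0.8, 0.1]])   # news
    y_train = np.array([1, 0, 1, 0])            # 1 = sports, 0 = non-sports

    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.predict([[2, 0.10, 6.3, 0.8]]))   # -> [1]: classified as sports
    ```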

  15. Emerging University Student Experiences of Learning Technologies across the Asia Pacific

    ERIC Educational Resources Information Center

    Barrett, B. F. D.; Higa, C.; Ellis, R. A.

    2012-01-01

    Three hundred students across eight countries and eleven higher education institutions in the Asia Pacific Region participated in two courses on climate change and disaster management that were supported by learning technologies: a satellite-enabled video-conferencing system and a learning management system. Evaluation of the student experience…

  16. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept to enhance dynamic media landscapes and immersive physical display environments, enabling intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process used to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.

  17. Bioluminescent system for dynamic imaging of cell and animal behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hara-Miyauchi, Chikako; Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198; Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510

    2012-03-09

    Highlights: We combined a yellow variant of GFP and firefly luciferase to make ffLuc-cp156. ffLuc-cp156 showed improved photon yield in cultured cells and transgenic mice. ffLuc-cp156 enabled video-rate bioluminescence imaging of freely-moving animals. ffLuc-cp156 mice enabled tracking of real-time drug delivery in conscious animals. -- Abstract: The current utility of bioluminescence imaging is constrained by a low photon yield that limits temporal sensitivity. Here, we describe an imaging method that uses a chemiluminescent/fluorescent protein, ffLuc-cp156, which consists of a yellow variant of Aequorea GFP and firefly luciferase. We report an improvement in photon yield by over three orders of magnitude over current bioluminescent systems. We imaged cellular movement at high resolution, including neuronal growth cones and microglial cell protrusions. Transgenic ffLuc-cp156 mice enabled video-rate bioluminescence imaging of freely moving animals, which may provide a reliable assay for drug distribution in behaving animals for pre-clinical studies.

  18. Smart sensing surveillance video system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system, and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  19. Army AL&T, October-December 2008

    DTIC Science & Technology

    2008-12-01

    Text fragments from the search excerpt: "...during the WIN-T technology demonstration Nov. 8, 2007, at Naval Air Engineering Station, Lakehurst, NJ. (U.S. Army photo by Russ Messeroll.)"; "...worldwide communications architecture, enabling connectivity from the global backbone to regional networks to posts/camps/stations, and, lastly, to..."; "...Force Tracker. Tacticomp™ wireless and Global Positioning System (GPS)-enabled hand-held computer. One Station Remote Video Terminal. Counter..."

  20. Constructible Authentic Representations: Designing Video Games That Enable Players to Utilize Knowledge Developed In-Game to Reason about Science

    ERIC Educational Resources Information Center

    Holbert, Nathan R.; Wilensky, Uri

    2014-01-01

    While video games have become a source of excitement for educational designers, creating informal game experiences that players can draw on when thinking and reasoning in non-game contexts has proved challenging. In this paper we present a design principle for creating educational video games that enables players to draw on knowledge resources…

  1. A practical implementation of free viewpoint video system for soccer games

    NASA Astrophysics Data System (ADS)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually, while the background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.
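
    The abstract does not give implementation details for the calibration step; a common OpenCV recipe for the checkerboard part of it looks like the sketch below, where the board size and image file names are placeholders.

    ```python
    import cv2
    import numpy as np

    # Nominal 9x6 inner-corner board; the paper does not give its board size.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points, image_size = [], [], None
    for fname in ["cam00_view01.png", "cam00_view02.png"]:  # placeholder images
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    if img_points:
        # Per-camera intrinsics and distortion; pose on the pitch would then
        # come from the field cross points, e.g. via cv2.solvePnP.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, image_size, None, None)
    ```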

  2. Oversampling in virtual visual sensors as a means to recover higher modes of vibration

    NASA Astrophysics Data System (ADS)

    Shariati, Ali; Schumacher, Thomas

    2015-03-01

    Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information for calibrating finite element models. There are several types of physical sensors that can measure the response over a range of frequencies; for most of them, however, accessibility, limits on the number of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, with no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available, inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end, high-frame-rate video cameras, allows recovery of all three natural frequencies of a three-story lab-scale structure.
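
    A virtual visual sensor reduces to tracking the intensity of a pixel (or small patch) over time and reading structural frequencies off its spectrum. The sketch below illustrates the idea on a synthetic three-mode signal standing in for a real video-derived trace; the frame rate and mode frequencies are invented.

    ```python
    import numpy as np

    fs = 240.0                      # high-frame-rate camera, frames per second
    t = np.arange(0, 10, 1 / fs)
    # Synthetic VVS intensity trace: three structural modes plus noise
    # (a stand-in for a real pixel time series from video of the structure).
    x = (np.sin(2 * np.pi * 2.1 * t) + 0.4 * np.sin(2 * np.pi * 6.8 * t)
         + 0.15 * np.sin(2 * np.pi * 11.5 * t) + 0.05 * np.random.randn(t.size))

    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    # The three largest non-DC peaks approximate the natural frequencies.
    peaks = freqs[np.argsort(spectrum[1:])[-3:] + 1]
    print(sorted(peaks))            # ~ [2.1, 6.8, 11.5] Hz
    ```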

  3. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical, since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  4. Chemistry of cometary meteoroids from video-tape records of meteor spectra

    NASA Technical Reports Server (NTRS)

    Millman, P. M.

    1982-01-01

    The chemistry of cometary meteoroids was studied using closed-circuit television observing systems. Vidicon cameras produce basic data on standard video tape and enable the recording of the spectra of faint shower meteors; consequently, the chemical study is extended to smaller particles and we have a larger data bank than is available from the more conventional method of recording meteor spectra by photography. The two main problems in using video-tape meteor spectrum records are: (1) the video-tape recording has a much lower resolution than the photographic technique; (2) video tape is a relatively new type of data storage in astronomy, and the methods of quantitative photometry have not yet been fully developed in the various fields where video tape is used. The use of the most detailed photographic meteor spectra to calibrate the video-tape records and to make positive identification of the more prominent chemical elements appearing in the spectra may solve the low-resolution problem. Progress in the development of standard photometric techniques for the analysis of video-tape records of meteor spectra is reported.

  5. Proxy-assisted multicasting of video streams over mobile wireless networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody

    2005-03-01

    This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two other baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among proxies provided better video quality delivered to a larger audience.

  6. Dynamic quantitative phase images of pond life, insect wings, and in vitro cell cultures

    NASA Astrophysics Data System (ADS)

    Creath, Katherine

    2010-08-01

    This paper presents images and data of live biological samples taken with a novel Linnik interference microscope. The specially designed optical system enables instantaneous 3D video measurements of dynamic motions within and among live cells without the need for contrast agents. This "label-free", vibration-insensitive imaging system enables measurement of biological objects in reflection using harmless light levels, with current magnifications of 10X (NA 0.3) and 20X (NA 0.5), wavelengths of 660 nm and 785 nm, and fields of view from several hundred microns up to a millimeter. At the core of the instrument is a phase-measurement camera (PMC) enabling simultaneous measurement of multiple interference patterns utilizing a pixelated phase mask that takes advantage of the polarization properties of light. This technology enables the creation of phase-image movies in real time at video rates, so that dynamic motions and volumetric changes can be tracked. Objects are placed on a reflective surface in liquid under a coverslip. Phase values are converted to optical thickness data, enabling volumetric, motion, and morphological studies. Data from a number of different mud-puddle organisms such as paramecia, flagellates and rotifers are presented, as are measurements of flying ant wings and cultures of human breast cancer cells. These data highlight examples of monitoring different biological processes and motions. The live presentation features 4D phase movies of these examples.
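
    The phase-to-thickness step is standard interferometry rather than anything specific to this instrument: in reflection, light traverses a height step twice, so a measured phase φ at wavelength λ corresponds to a surface height of (a textbook relation, not taken from the paper):

    ```latex
    % Reflection geometry: the beam picks up the height difference twice.
    h \;=\; \frac{\varphi}{2\pi}\cdot\frac{\lambda}{2}
      \;=\; \frac{\varphi\,\lambda}{4\pi},
    \qquad
    \text{e.g. } \varphi = \pi,\ \lambda = 660\,\mathrm{nm}
    \;\Rightarrow\; h = 165\,\mathrm{nm}.
    ```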

  7. Remote high-definition rotating video enables fast spatial survey of marine underwater macrofauna and habitats.

    PubMed

    Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas

    2012-01-01

    Observing spatial and temporal variations of marine biodiversity with non-destructive techniques is central to understanding ecosystem resilience and to monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat, but most video techniques that do not need the presence of a diver use baited remote systems. In this paper, we present an original video technique which relies on a remote, unbaited rotating system including a high-definition camera. The system is set on the sea floor to record images, which are then analysed at the office to quantify biotic and abiotic sea-bottom cover and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in major lagoon habitats. The technique enabled the detection and identification of a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images, and a large number of observations could be carried out per day at sea. This study showed the strong potential of this non-obtrusive technique for observing both macrofauna and habitat. It offers unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.

  8. Aid for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Viewstar is a video system that magnifies and focuses words so partially sighted people can read or type from printed or written copy. Invented by Dr. Leonard Weinstein, a Langley engineer, the device enables Sandra Raven, Weinstein's stepdaughter, who is legally blind, to work as a clerk typist. Weinstein has also developed other magnification systems for individual needs.

  9. An Analysis of Biometric Technology as an Enabler to Information Assurance

    DTIC Science & Technology

    2005-03-01

    Text fragments from the search excerpt: "Facial Recognition..."; "...Facial recognition systems are gaining momentum as of late. The reason for this is that facial recognition systems..."; "...the traffic camera on the street corner, video technology is everywhere. There are a couple of different methods currently being used for facial..."

  10. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  11. Developing assessment system for wireless capsule endoscopy videos based on event detection

    NASA Astrophysics Data System (ADS)

    Chen, Ying-ju; Yasen, Wisam; Lee, Jeongkyu; Lee, Dongha; Kim, Yongho

    2009-02-01

    Along with advances in wireless technology and miniature cameras, Wireless Capsule Endoscopy (WCE), the combination of both, enables a physician to examine a patient's digestive system without actually performing a surgical procedure. Although WCE is a technical breakthrough that allows physicians to visualize the entire small bowel noninvasively, viewing the video takes 1-2 hours, which is very time consuming for the gastroenterologist. Not only does this set a limit on the wide application of the technology, but it also incurs a considerable cost. Therefore, it is important to automate the process so that medical clinicians only focus on events of interest. As an extension of our previous work characterizing the motility of the digestive tract in WCE videos, we propose a new assessment system based on energy-based event detection (EG-EBD) to classify the events in WCE videos. In this system, we first extract general features of a WCE video that can characterize the intestinal contractions in the digestive organs. Then, event boundaries are identified by using a High Frequency Content (HFC) function, and the segments are classified into WCE events by special features. We focus on entering the duodenum, entering the cecum, and active bleeding. The assessment system can easily be extended to discover more WCE events, such as detailed organ segmentation and more diseases, by introducing new special features. In addition, the system provides a score for every WCE image for each event. Using the event scores, the system helps a specialist speed up the diagnosis process.
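
    The abstract names a High Frequency Content (HFC) function for finding event boundaries; a common definition weights each spectral bin by its index, as in the hedged sketch below (the paper's exact variant, features, and thresholds are not given).

    ```python
    import numpy as np

    def hfc(frame_signal: np.ndarray) -> float:
        """High Frequency Content: spectral magnitudes weighted by bin index,
        so segments dominated by high-frequency energy score high."""
        mag = np.abs(np.fft.rfft(frame_signal))
        return float(np.sum(np.arange(mag.size) * mag))

    def event_boundaries(frame_features, n_sigma=2.0):
        """Flag frames whose HFC jumps well above the mean - a stand-in for
        the paper's boundary detector, whose threshold is not given."""
        scores = np.array([hfc(f) for f in frame_features])
        return np.where(scores > scores.mean() + n_sigma * scores.std())[0]
    ```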

  12. HiSeasNet: Oceanographic Ships Join the Grid

    NASA Astrophysics Data System (ADS)

    Berger, Jonathan; Orcutt, John; Foley, Steven; Bohlen, Steven

    2006-05-01

    HiSeasNet, the communications network providing full-period Internet access for the U.S. academic ocean research fleet, is an enabling technology that is changing the way oceanography is done in the 21st century. With the installation in March 2006 of a system on the research vessel (R/V) Seward Johnson and the planned installation on the R/V Marcus Langseth later this year, all but two of the Universities National Oceanographic Laboratories System (UNOLS) fleet of large/global and intermediate/ocean vessels will be equipped with HiSeasNet capability. HiSeasNet is a full-service Internet Protocol (IP) satellite network utilizing Cisco technology. In addition to the familiar IP services, such as e-mail, telnet, ssh, rlogin, Web traffic, and ftp, HiSeasNet can move real-time audio and video traffic across the satellite links. Phone systems onboard research ships can be connected to their home institutions' phone exchanges. Video teleconferencing on the current 96 kilobits-per-second circuits supports compressed video at about 10 frames per second, allowing for effective conversations and demonstrations with ship-to-shore video.

  13. What do we do with all this video? Better understanding public engagement for image and video annotation

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Miller, A.; Zykov, V.

    2016-12-01

    Advanced robotic vehicles are increasingly being used by oceanographic research vessels to enable more efficient and widespread exploration of the ocean, particularly the deep ocean. With cutting-edge capabilities mounted onto robotic vehicles, data at high resolutions are being generated more than ever before, enabling enhanced data collection and the potential for broader participation. For example, high-resolution camera technology not only improves visualization of the ocean environment, but also expands the capacity to engage participants remotely through increased use of telepresence and virtual reality techniques. Schmidt Ocean Institute is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation and analysis, and open sharing of information. Telepresence-enabled research is an important component of Schmidt Ocean Institute's science research cruises, which this presentation will highlight. Schmidt Ocean Institute is one of the only research programs that makes its entire underwater vehicle dive series available online, creating a collection of video that enables anyone to follow deep-sea research in real time. We encourage students, educators and the general public to take advantage of the freely available dive videos. Additionally, other SOI-supported internet platforms have engaged the public in image and video annotation activities. Examples of these online platforms, which utilize citizen scientists to annotate scientific image and video data, will be provided. This presentation will include an introduction to SOI-supported video and image tagging citizen science projects, real-time robot tracking, live ship-to-shore communications, and an array of outreach activities that enable scientists to interact with the public and explore the ocean in fascinating detail.

  14. Integrating distributed multimedia systems and interactive television networks

    NASA Astrophysics Data System (ADS)

    Shvartsman, Alex A.

    1996-01-01

    Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete end-to-end, commercially deployable and scalable interactive multimedia application system over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome a lack of standards, and deal with unpredictable development cycles and quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility are limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.

  15. Traffic Monitor

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.

  16. Video capture of clinical care to enhance patient safety

    PubMed Central

    Weinger, M; Gonzales, D; Slagle, J; Syeed, M

    2004-01-01

    

    Experience from other domains suggests that videotaping and analyzing actual clinical care can provide valuable insights for enhancing patient safety through improvements in the process of care. Methods are described for the videotaping and analysis of clinical care using a high-quality portable multi-angle digital video system that enables simultaneous capture of vital signs and time-code synchronization of all data streams. An observer can conduct clinician performance assessment (such as workload measurements or behavioral task analysis) either in real time (during videotaping) or while viewing previously recorded videotapes. Supplemental data are synchronized with the video record and stored electronically in a hierarchical database. The video records are transferred to DVD, resulting in a small, cheap, and accessible archive. A number of technical and logistical issues are discussed, including consent of patients and clinicians, maintaining subject privacy and confidentiality, and data security. Using anesthesiology as a test environment, over 270 clinical cases (872 hours) have been successfully videotaped and processed using the system. PMID:15069222

  17. MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?

    MedlinePlus

    Question: Is audio description available for videos on MedlinePlus? Answer: Audio description of videos helps make the content of videos accessible to ...

  18. Application of M-JPEG compression hardware to dynamic stimulus production.

    PubMed

    Mulligan, J B

    1997-01-01

    Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.

  19. Syntax-directed content analysis of videotext: application to a map detection recognition system

    NASA Astrophysics Data System (ADS)

    Aradhye, Hrishikesh; Herson, James A.; Myers, Gregory

    2003-01-01

    Video is an increasingly important and ever-growing source of information to the intelligence and homeland defense analyst. A capability to automatically identify the contents of video imagery would enable the analyst to index relevant foreign and domestic news videos in a convenient and meaningful way. To this end, the proposed system aims to help determine the geographic focus of a news story directly from video imagery by detecting and geographically localizing political maps from news broadcasts, using the results of videotext recognition in lieu of a computationally expensive, scale-independent shape recognizer. Our novel method for the geographic localization of a map is based on the premise that the relative placement of text superimposed on a map roughly corresponds to the geographic coordinates of the locations the text represents. Our scheme extracts and recognizes videotext, and iteratively identifies the geographic area, while allowing for OCR errors and artistic freedom. The fast and reliable recognition of such maps by our system may provide valuable context and supporting evidence for other sources, such as speech recognition transcripts. The concepts of syntax-directed content analysis of videotext presented here can be extended to other content analysis systems.
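
    One way to realize the stated premise, that text placement on a map roughly tracks geographic position, is a least-squares affine fit from recognized place names with known coordinates to their pixel positions. The sketch below is illustrative: the gazetteer entries and pixel coordinates are invented, and a robust variant (e.g. RANSAC over the matches) would be needed to tolerate the OCR errors the paper mentions.

    ```python
    import numpy as np

    # Recognized videotext with pixel positions, matched to a gazetteer.
    # All names and coordinates here are invented, not from the paper.
    pixels = np.array([[120, 80], [300, 210], [210, 150]], float)    # (x, y)
    geo = np.array([[48.85, 2.35], [41.90, 12.49], [45.46, 9.19]])   # (lat, lon)

    # Fit latitude and longitude each as an affine function of (x, y).
    A = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeff, *_ = np.linalg.lstsq(A, geo, rcond=None)

    def pixel_to_geo(x: float, y: float) -> np.ndarray:
        """Estimate (lat, lon) for an unlabeled point on the same map."""
        return np.array([x, y, 1.0]) @ coeff

    print(pixel_to_geo(200, 140))
    ```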

  20. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  1. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
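
    The core of such a control model is an invertible map between commanded (pan, tilt) angles and viewing directions in the scene. The sketch below shows only this geometric kernel under an assumed axis convention; the paper's full model also calibrates zoom and mechanical offsets.

    ```python
    import numpy as np

    def ptz_to_ray(pan_deg: float, tilt_deg: float) -> np.ndarray:
        """Unit viewing direction for a pan/tilt pose (zoom ignored here).
        Convention: pan about the vertical axis, tilt above the horizon."""
        p, t = np.radians(pan_deg), np.radians(tilt_deg)
        return np.array([np.cos(t) * np.cos(p),
                         np.cos(t) * np.sin(p),
                         np.sin(t)])

    def ray_to_ptz(ray: np.ndarray) -> tuple:
        """Inverse map: command angles that point the camera along `ray`."""
        x, y, z = ray / np.linalg.norm(ray)
        return (np.degrees(np.arctan2(y, x)), np.degrees(np.arcsin(z)))

    # Round-trip sanity check:
    print(ray_to_ptz(ptz_to_ray(30.0, 10.0)))   # ~ (30.0, 10.0)
    ```

    Intersecting each such ray with the ground plane (or the orthophotograph) then yields the geo-referenced footprint on which coordinated control can operate.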

  2. Display Sharing: An Alternative Paradigm

    NASA Technical Reports Server (NTRS)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display-sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.

  3. Automation of Vapor-Diffusion Growth of Protein Crystals

    NASA Technical Reports Server (NTRS)

    Hamrick, David T.; Bray, Terry L.

    2005-01-01

    Some improvements have been made in a system of laboratory equipment developed previously for studying the crystallization of proteins from solution by use of dynamically controlled flows of dry gas. The improvements involve mainly (1) automation of dispensing of liquids for starting experiments, (2) automatic control of drying of protein solutions during the experiments, and (3) provision for automated acquisition of video images for monitoring experiments in progress and for post-experiment analysis. The automation of dispensing of liquids was effected by adding an automated liquid-handling robot that can aspirate source solutions and dispense them in either a hanging-drop or a sitting-drop configuration, whichever is specified, in each of 48 experiment chambers. A video camera of approximately the size and shape of a lipstick dispenser was added to a mobile stage that is part of the robot, in order to enable automated acquisition of images in each experiment chamber. The experiment chambers were redesigned to enable the use of sitting drops, enable backlighting of each specimen, and facilitate automation.

  4. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OS X, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.

  5. Video streaming into the mainstream.

    PubMed

    Garrison, W

    2001-12-01

    Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as 3G mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as QuickTime, Windows Media and RealVideo provide reliable data streams as video-on-demand, and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has used streaming technologies, together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment, to successfully deliver on-line course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described and future medical applications are discussed.

  6. Simple video format for mobile applications

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Miao, Zhourong; Li, Chung-Sheng

    2000-04-01

    With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computer (HHC), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, the pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low-decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance and video mail using pervasive computing devices.

  7. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, not only for live video but also for playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.

  8. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of the underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis, in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
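
    As a small example of the kind of movement-derived measure such a system computes, the sketch below takes a player trajectory extracted from video and returns distance covered and mean speed; it is an illustrative measure, not the authors' analytics pipeline.

    ```python
    import numpy as np

    def distance_and_speed(track: np.ndarray, fps: float = 25.0):
        """Total distance (m) and mean speed (m/s) from an (N, 2) player
        trajectory in pitch coordinates; an illustrative measure only."""
        steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
        total = steps.sum()
        return total, total / (len(track) / fps)

    # Fake 10-second track standing in for one extracted from match video.
    track = np.cumsum(np.random.randn(250, 2) * 0.1, axis=0)
    print(distance_and_speed(track))
    ```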

  9. Adaptive maritime video surveillance

    NASA Astrophysics Data System (ADS)

    Gupta, Kalyan Moy; Aha, David W.; Hartley, Ralph; Moore, Philip G.

    2009-05-01

    Maritime assets such as ports, harbors, and vessels are vulnerable to a variety of near-shore threats such as small-boat attacks. Currently, such vulnerabilities are addressed predominantly by watchstanders and manual video surveillance, which is manpower intensive. Automatic maritime video surveillance techniques are being introduced to reduce manpower costs, but they have limited functionality and performance. For example, they only detect simple events such as perimeter breaches and cannot predict emerging threats. They also generate too many false alerts and cannot explain their reasoning. To overcome these limitations, we are developing the Maritime Activity Analysis Workbench (MAAW), which will be a mixed-initiative real-time maritime video surveillance tool that uses an integrated supervised machine learning approach to label independent and coordinated maritime activities. It uses the same information to predict anomalous behavior and explain its reasoning; this is an important capability for watchstander training and for collecting performance feedback. In this paper, we describe MAAW's functional architecture, which includes the following pipeline of components: (1) a video acquisition and preprocessing component that detects and tracks vessels in video images, (2) a vessel categorization and activity labeling component that uses standard and relational supervised machine learning methods to label maritime activities, and (3) an ontology-guided vessel and maritime activity annotator to enable subject matter experts (e.g., watchstanders) to provide feedback and supervision to the system. We report our findings from a preliminary system evaluation on river traffic video.

  10. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
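
    A hedged sketch of the retrieval step follows: each clip is reduced to a low-dimensional descriptor (here an invented mix of a normalized macroblock-type histogram and motion statistics) and queries are answered by nearest-neighbor ranking.

    ```python
    import numpy as np

    # Illustrative per-clip descriptors: normalized macroblock-type histogram
    # concatenated with coarse motion-vector statistics (invented values).
    index = {
        "clip_newsdesk": np.array([0.7, 0.2, 0.1, 0.3, 0.1]),
        "clip_match":    np.array([0.2, 0.5, 0.3, 2.1, 1.4]),
        "clip_crowd":    np.array([0.3, 0.4, 0.3, 1.8, 1.2]),
    }

    def retrieve(query: np.ndarray, k: int = 2):
        """Rank stored clips by Euclidean distance to the query descriptor."""
        ranked = sorted(index, key=lambda c: np.linalg.norm(index[c] - query))
        return ranked[:k]

    print(retrieve(np.array([0.25, 0.45, 0.3, 2.0, 1.3])))
    # -> ['clip_match', 'clip_crowd']: the scenes most similar to the query
    ```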

  11. Mobile Video in Everyday Social Interactions

    NASA Astrophysics Data System (ADS)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even real-time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera in the social environment, everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations) but also the other way around, the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  12. ARINC 818 specification revisions enable new avionics architectures

    NASA Astrophysics Data System (ADS)

    Grunwald, Paul

    2014-06-01

    The ARINC 818 Avionics Digital Video Bus is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits. The Boeing 787, A350XWB, A400M, KC-46A, and many other aircraft use it. The ARINC 818 specification, which was initially released in 2006, has recently undergone a major update to address new avionics architectures and capabilities. Over the seven years since its release, projects have gone beyond the specification because of the complexity of new architectures and desired capabilities, such as video switching, bi-directional communication, data-only paths, and camera and sensor control provisions. The ARINC 818 specification was revised in 2013, and ARINC 818-2 was approved in November 2013. The revisions in ARINC 818-2 enable switching, stereo and 3-D provisions, color sequential implementations, regions of interest, bi-directional communication, higher link rates, data-only transmission, and synchronization signals. This paper discusses each of the new capabilities and their impact on avionics and display architectures, especially when integrating large area displays, stereoscopic displays, multiple displays, and systems that include a large number of sensors.

  13. Multiple Frequency Audio Signal Communication as a Mechanism for Neurophysiology and Video Data Synchronization

    PubMed Central

    Topper, Nicholas C.; Burke, S.N.; Maurer, A.P.

    2014-01-01

    BACKGROUND Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. NEW METHOD A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. RESULTS The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. COMPARISON WITH EXISTING METHODS Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. CONCLUSIONS While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. PMID:25256648

  14. Multiple frequency audio signal communication as a mechanism for neurophysiology and video data synchronization.

    PubMed

    Topper, Nicholas C; Burke, Sara N; Maurer, Andrew Porter

    2014-12-30

    Current methods for aligning neurophysiology and video data are either prepackaged, requiring the additional purchase of a software suite, or use a blinking LED with a stationary pulse-width and frequency. These methods lack a significant user interface for adaptation, are expensive, or risk a misalignment of the two data streams. A cost-effective means to obtain high-precision alignment of behavioral and neurophysiological data is obtained by generating an audio pulse embedded with two domains of information: a low-frequency binary-counting signal and a high, randomly changing frequency. This enables the derivation of temporal information while maintaining enough entropy in the system for algorithmic alignment. The sample-to-frame index constructed using the audio input correlation method described in this paper enables video and data acquisition to be aligned at a sub-frame level of precision. Traditionally, a synchrony pulse is recorded on-screen via a flashing diode. The higher sampling rate of the audio input of the camcorder enables the timing of an event to be detected with greater precision. While on-line analysis and synchronization using specialized equipment may be ideal in some cases, the method presented here is a viable, low-cost alternative and gives the flexibility to interface with custom off-line analysis tools. Moreover, the ease of constructing and implementing this set-up makes it applicable to a wide variety of applications that require video recording. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking, and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and thus predicts bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, a stereo 3D sparse reconstruction algorithm not only determines the position of the person in the scene but also elegantly resolves the scale ambiguity inherent in the video tracker. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
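
    A minimal sketch of the detect-then-track loop follows, assuming OpenCV with the contrib modules (which provide KCF). Here detect_person() is a hypothetical stand-in for the YOLO detector, the input file name is invented, and the stereo reconstruction stage is omitted.

```python
# Sketch of detection initializing a KCF tracker; not the authors' code.
import cv2

def detect_person(frame):
    # Placeholder: a real system would run a YOLO CNN here.
    return (100, 80, 60, 120)  # hypothetical (x, y, w, h) box

cap = cv2.VideoCapture("walk.mp4")   # hypothetical input video
ok, frame = cap.read()
box = detect_person(frame)

tracker = cv2.TrackerKCF_create()    # discriminative correlation-filter tracker
tracker.init(frame, box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, box = tracker.update(frame)  # per-frame KCF update
    if ok:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:         # Esc to quit
        break
```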

  16. Increased ISR operator capability utilizing a centralized 360° full motion video display

    NASA Astrophysics Data System (ADS)

    Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.

    2012-06-01

    In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they may also overload the operator; only a contextualized, centralized display of information with an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data on a real-time 360 degree full motion video display. The system described could function as a primary display system for command and control in security, military, and observation posts. It can process, and enable interactive control of, multiple other sensor systems, and it enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. It can also interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).

  17. Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu

    P2P (Peer-to-Peer) file sharing architectures are scalable and cost-effective. Hence, applying P2P architectures to media streaming is attractive and expected to provide an alternative to current video streaming based on IP multicast or content delivery systems, because the current systems require expensive network infrastructures and large-scale centralized cache storage. In this paper, we investigate P2P progressive download for enabling Internet video streaming services. We demonstrated the capability of P2P progressive download both in a laboratory test network and in the Internet. Through the experiments, we clarified the contribution of FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the causes of some download performance degradation that occurred in the experiments and discuss effective methods for providing video streaming services using P2P progressive download in current heterogeneous networks.

  18. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as a recording medium in surveillance systems, and increasingly by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University's Image Coding Group. The accuracy of such forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way to obtain reliable analysis results. Surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal and limit recording to only one of the cameras connected to the VCR. A way around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras yields a large amount of data, so the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques vary in their usefulness for such content. Our goal is to propose a possible system that offers the best compromise among what needs to be recorded, movements in the recorded scene, loss of information, resolution, and so on, to secure efficient recording of the crime and enable forensic analysis. The preventive effect of a well-functioning surveillance system and well-established image analysis methods should not be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.

  19. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed domain parsing of video was presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type independent representation of the various types of frames present in an MPEG video, in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.

  20. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality when compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies of DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system using a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an OpenHEVC-based DASH client, and a NetEm box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions, such as packet loss, bandwidth and delay, on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
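
    The client-side quality switching can be illustrated with a toy rate-adaptation rule: pick the highest representation whose bitrate fits the measured throughput. The bitrate ladder and safety margin below are assumptions for illustration, not the logic of the GPAC client used in the paper, and real clients also weigh buffer occupancy.

```python
# Hypothetical DASH rate-adaptation sketch.
LADDER_KBPS = [350, 700, 1500, 3000, 6000]   # assumed HEVC representations

def choose_representation(throughput_kbps, margin=0.8):
    """Highest bitrate that fits the throughput, with a safety margin."""
    usable = throughput_kbps * margin
    candidates = [b for b in LADDER_KBPS if b <= usable]
    return max(candidates) if candidates else min(LADDER_KBPS)

print(choose_representation(2400))   # -> 1500 kbps representation
```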

  1. Live HDR video streaming on commodity hardware

    NASA Astrophysics Data System (ADS)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
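
    In the pipeline above, tone mapping is confined to the display end. As a concrete but hypothetical illustration (the abstract does not commit to a specific operator), the sketch below applies the classic global Reinhard operator to a linear HDR frame.

```python
# Global Reinhard tone mapping, L_d = L / (1 + L), as an illustrative stand-in.
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    # Luminance from linear RGB (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # geometric mean luminance
    scaled = key * lum / log_avg                  # expose to the mid-grey key
    ld = scaled / (1.0 + scaled)                  # compress into [0, 1)
    ratio = ld / (lum + eps)                      # per-pixel luminance scaling
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

frame = np.abs(np.random.randn(720, 1280, 3)) * 10.0  # stand-in linear HDR frame
sdr = reinhard_tonemap(frame)                         # ready for an SDR display
```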

  2. MPEG-7 based video annotation and browsing

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens

    2003-11-01

    The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionality. Since manual video annotation can be time-consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and describe our approach to automatic annotation of equestrian sports videos. We concentrate especially on MPEG-7 based feature extraction and content description, applying different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, the information is stored jointly with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user interface to provide content-based access to the video stream, and also for media browsing on a streaming server.
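
    As a generic illustration of descriptor-based cut detection (a simple color-histogram stand-in, not the MPEG-7 descriptors the paper applies), the sketch below declares a cut when the histogram distance between consecutive frames exceeds a threshold; the threshold value is an assumption.

```python
# Generic shot-cut detector: flag large inter-frame histogram distances.
import cv2

def find_cuts(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    cuts, prev_hist, i = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 for a cut
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(i)
        prev_hist, i = hist, i + 1
    return cuts
```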

  3. Video-based measurements for wireless capsule endoscope tracking

    NASA Astrophysics Data System (ADS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of Speeded-Up Robust Features (SURF) from video frames, registration of consecutive frames based on the random sample consensus (RANSAC) algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by applying this method to wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.
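
    A rough sketch of the registration idea follows. Note the substitution of ORB for the paper's SURF features (SURF requires a non-free OpenCV build), so this is an analogous pipeline rather than the authors' method: consecutive frames are matched and a similarity transform (displacement plus rotation) is estimated robustly with RANSAC.

```python
# Feature matching + RANSAC estimation of inter-frame motion (illustrative).
import cv2
import numpy as np

def register(prev_gray, curr_gray):
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier correspondences, as in the paper's pipeline.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]                         # in-plane displacement
    theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # in-plane rotation
    return dx, dy, theta
```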

  4. PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.

    PubMed

    Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin

    2016-10-01

    Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
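
    As a loose illustration of the underlying notion of a "peak", not PeakVizor's actual algorithm (which the abstract does not detail), the sketch below locates heavily clicked segments in a synthetic per-second clickstream with a standard peak detector.

```python
# Hypothetical clickstream peak detection for one course video.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
clicks = rng.poisson(5, 600).astype(float)      # 10-minute video, baseline noise
clicks[120:135] += 40                           # a re-watched segment near 2:00

peaks, props = find_peaks(clicks, height=20, distance=30)
for p in peaks:
    print(f"peak at {p // 60}:{p % 60:02d}, {clicks[p]:.0f} clicks")
```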

  5. Goddard In The Galaxy [Music Video

    NASA Image and Video Library

    2014-07-14

    This video highlights the many ways NASA Goddard Space Flight Center explores the universe. So crank up your speakers and let the music be your guide. "My Songs Know What You Did In The Dark (Light Em Up)" performed by Fall Out Boy, courtesy of Island Def Jam Music Group under license from Universal Music Enterprises. Download the video here: svs.gsfc.nasa.gov/goto?11378 NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  6. A Framework of Simple Event Detection in Surveillance Video

    NASA Astrophysics Data System (ADS)

    Xu, Weiguang; Zhang, Yafei; Lu, Jianjiang; Tian, Yulong; Wang, Jiabao

    Video surveillance is playing an increasingly important role in people's social lives. Real-time alerting of threatening events and searching for interesting content in large archives of stored video footage require a human operator to pay full attention to a monitor for long periods. This labor-intensive mode of operation has limited the effectiveness and efficiency of such systems. A framework for simple event detection is presented to advance the automation of video surveillance. An improved inner key point matching approach is used to compensate for background motion in real time; frame differencing is used to detect the foreground; HOG-based classifiers are used to classify foreground objects into people and cars; and mean-shift is used to track the recognized objects. Events are detected based on predefined rules. The maturity of the algorithms guarantees the robustness of the framework, and the improved approach and the easily checked rules enable the framework to work in real time. Future work is also discussed.
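
    A condensed OpenCV sketch of such a pipeline follows, with background subtraction standing in for the paper's motion-compensated frame differencing; the camera-motion compensation, mean-shift tracking, and rule engine are only indicated by comments, and the input file name and activity threshold are assumptions.

```python
# Foreground detection + HOG person classification (illustrative pipeline).
import cv2

backsub = cv2.createBackgroundSubtractorMOG2()
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("surveillance.avi")     # hypothetical input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)                  # frame-difference-style foreground
    if cv2.countNonZero(fg) > 500:             # hypothetical activity threshold
        people, _ = hog.detectMultiScale(frame)
        for (x, y, w, h) in people:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Rule-based event logic (e.g., "person enters zone") would go here.
```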

  7. Digital Video Revisited: Storytelling, Conferencing, Remixing

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2012-01-01

    Five years ago in the February, 2007, issue of LLT, I wrote about developments in digital video of potential interest to language teachers. Since then, there have been major changes in options for video capture, editing, and delivery. One of the most significant has been the rise in popularity of video-based storytelling, enabled largely by…

  8. Using Informal Education through Music Video Creation

    ERIC Educational Resources Information Center

    Cayari, Christopher

    2014-01-01

    Music video creation provides students a new way to express themselves and become better performers and consumers of media. This article provides a new perspective on Lucy Green's informal music pedagogy by enabling students to create music videos in music classrooms; thus, students are able to create music videos that informally develop…

  9. Photogrammetric Applications of Immersive Video Cameras

    NASA Astrophysics Data System (ADS)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene with a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together individual video frames from the separate cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is far more than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  10. Interactive Video Listening Comprehension in Foreign Language Instruction: Development and Evaluation.

    ERIC Educational Resources Information Center

    Fischer, Robert

    The report details development, at Southwest Texas State University and later at Pennsylvania State University, of a computer authoring system ("Libra") enabling foreign language faculty to develop multimedia lessons focusing on listening comprehension. Staff at Southwest Texas State University first developed a Macintosh version of the…

  11. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
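
    The buffer-fullness-driven voltage and frequency adjustment mentioned above can be sketched as a simple control loop. Everything below is a hypothetical illustration: the frequency steps, thresholds, and the set_cpu_frequency() stub are assumptions, not the paper's implementation.

```python
# Buffer-driven dynamic frequency scaling sketch: a full output buffer means
# the decoder is ahead of real time and the clock can drop; a draining buffer
# means it is falling behind and the clock must rise.
FREQ_STEPS_MHZ = [200, 400, 600, 800]

def set_cpu_frequency(mhz):
    print(f"requesting {mhz} MHz")      # platform-specific call in practice

def adjust_clock(buffer_fullness, current_idx):
    """buffer_fullness in [0, 1]; returns the new frequency-step index."""
    if buffer_fullness > 0.75 and current_idx > 0:
        current_idx -= 1                # ahead of schedule: slow down, save power
    elif buffer_fullness < 0.25 and current_idx < len(FREQ_STEPS_MHZ) - 1:
        current_idx += 1                # falling behind: speed up
    set_cpu_frequency(FREQ_STEPS_MHZ[current_idx])
    return current_idx
```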

  12. Research Instruments

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The GENETI-SCANNER, the newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides and locates, digitizes, measures and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI bases its primary product line on NASA image processing technology. The instruments perform karyotyping, a process employed in the analysis and classification of chromosomes, using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. The product is no longer being manufactured.

  13. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    PubMed

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  14. Using Image Analysis to Explore Changes In Bacterial Mat Coverage at the Base of a Hydrothermal Vent within the Caldera of Axial Seamount

    NASA Astrophysics Data System (ADS)

    Knuth, F.; Crone, T. J.; Marburg, A.

    2017-12-01

    The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
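
    As a rough illustration of the kind of programmatic image analysis described, the sketch below estimates mat coverage in a single extracted frame by Otsu thresholding. The file name, preprocessing, and the assumption that the pale mat is the brighter class are all hypothetical, not the project's actual workflow.

```python
# Fraction of a frame classified as bacterial mat via simple thresholding.
import cv2

frame = cv2.imread("camhd_scene7.png")            # hypothetical extracted frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
_, mat = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
coverage = cv2.countNonZero(mat) / mat.size       # pixels classed as mat
print(f"bacterial mat coverage: {coverage:.1%}")
```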

  15. Race and Emotion in Computer-Based HIV Prevention Videos for Emergency Department Patients

    ERIC Educational Resources Information Center

    Aronson, Ian David; Bania, Theodore C.

    2011-01-01

    Computer-based video provides a valuable tool for HIV prevention in hospital emergency departments. However, the type of video content and protocol that will be most effective remain underexplored and the subject of debate. This study employs a new and highly replicable methodology that enables comparisons of multiple video segments, each based on…

  16. Video-Stimulated Accounts: Young Children Accounting for Interactional Matters in Front of Peers

    ERIC Educational Resources Information Center

    Theobald, Maryanne

    2012-01-01

    Research in the early years places increasing importance on participatory methods to engage children. The playback of video-recording to stimulate conversation is a research method that enables children's accounts to be heard and attends to a participatory view. During video-stimulated sessions, participants watch an extract of video-recording of…

  17. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarised into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm, an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph, and the graph is summarized spatially, temporally and semantically using ontology-guided sub-graph matching and re-writing. The system exploits domain-specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.
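
    A hypothetical miniature of a semantic concept graph is sketched below, with networkx used purely for illustration: nodes carry object classes, edges carry relations, and a toy consistency rule hints at how a contradiction can expose a misclassification. The paper's ontology-guided sub-graph matching and re-writing is far richer than this.

```python
# Toy semantic concept graph with a single consistency check.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("object_07", cls="person")          # hypothetical detections
g.add_node("road_01", cls="road")
g.add_node("building_03", cls="checkpoint")
g.add_edge("object_07", "road_01", rel="moves_along")
g.add_edge("object_07", "building_03", rel="approaches")

# Reasoning rule: something moving along a road at vehicle speed should be a
# vehicle; a violation flags a likely misclassification to correct.
for u, v, data in g.edges(data=True):
    if data["rel"] == "moves_along" and g.nodes[u]["cls"] not in {"car", "truck"}:
        print(f"contradiction: {u} classed '{g.nodes[u]['cls']}' moves along {v}")
```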

  18. Field-Sequential Color Converter

    NASA Technical Reports Server (NTRS)

    Studer, Victor J.

    1989-01-01

    Electronic conversion circuit enables display of signals from field-sequential color-television camera on color video camera. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.

  19. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
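
    The abstract does not give the reconstruction math, but classic photometric stereo under Lambertian assumptions recovers surface normals from a per-pixel least-squares solve: with known light directions L and intensities I, each pixel satisfies I = L(rho * n). The sketch below illustrates that standard computation; the light directions are hypothetical stand-ins for the system's four diffusing-tip fibers.

```python
# Classic Lambertian photometric stereo: normals from >= 3 lit images.
import numpy as np

L = np.array([[ 0.0,  0.5, 0.87],     # 4 assumed unit light directions
              [ 0.5,  0.0, 0.87],
              [ 0.0, -0.5, 0.87],
              [-0.5,  0.0, 0.87]])

def normals_from_images(images):
    """images: array (4, H, W) of grayscale intensities under lights L."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                     # one column per pixel
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-8)              # unit normals, shape (3, H*W)
    return n.reshape(3, h, w), albedo.reshape(h, w)
```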

  20. ViCoMo: visual context modeling for scene understanding in video surveillance

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.

    2013-10-01

    The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations, parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and if necessary raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.

  1. Transforming Education Research Through Open Video Data Sharing.

    PubMed

    Gilmore, Rick O; Adolph, Karen E; Millman, David S; Gordon, Andrew

    2016-01-01

    Open data sharing promises to accelerate the pace of discovery in the developmental and learning sciences, but significant technical, policy, and cultural barriers have limited its adoption. As a result, most research on learning and development remains shrouded in a culture of isolation. Data sharing is the rare exception (Gilmore, 2016). Many researchers who study teaching and learning in classroom, laboratory, museum, and home contexts use video as a primary source of raw research data. Unlike other measures, video captures the complexity, richness, and diversity of behavior. Moreover, because video is self-documenting, it presents significant potential for reuse. However, the potential for reuse goes largely unrealized because videos are rarely shared. Research videos contain information about participants' identities making the materials challenging to share. The large size of video files, diversity of formats, and incompatible software tools pose technical challenges. The Databrary (databrary.org) digital library enables researchers who study learning and development to store, share, stream, and annotate videos. In this article, we describe how Databrary has overcome barriers to sharing research videos and associated data and metadata. Databrary has developed solutions for respecting participants' privacy; for storing, streaming, and sharing videos; and for managing videos and associated metadata. The Databrary experience suggests ways that videos and other identifiable data collected in the context of educational research might be shared. Open data sharing enabled by Databrary can serve as a catalyst for a truly multidisciplinary science of learning.

  2. Transforming Education Research Through Open Video Data Sharing

    PubMed Central

    Gilmore, Rick O.; Adolph, Karen E.; Millman, David S.; Gordon, Andrew

    2016-01-01

    Open data sharing promises to accelerate the pace of discovery in the developmental and learning sciences, but significant technical, policy, and cultural barriers have limited its adoption. As a result, most research on learning and development remains shrouded in a culture of isolation. Data sharing is the rare exception (Gilmore, 2016). Many researchers who study teaching and learning in classroom, laboratory, museum, and home contexts use video as a primary source of raw research data. Unlike other measures, video captures the complexity, richness, and diversity of behavior. Moreover, because video is self-documenting, it presents significant potential for reuse. However, the potential for reuse goes largely unrealized because videos are rarely shared. Research videos contain information about participants’ identities making the materials challenging to share. The large size of video files, diversity of formats, and incompatible software tools pose technical challenges. The Databrary (databrary.org) digital library enables researchers who study learning and development to store, share, stream, and annotate videos. In this article, we describe how Databrary has overcome barriers to sharing research videos and associated data and metadata. Databrary has developed solutions for respecting participants’ privacy; for storing, streaming, and sharing videos; and for managing videos and associated metadata. The Databrary experience suggests ways that videos and other identifiable data collected in the context of educational research might be shared. Open data sharing enabled by Databrary can serve as a catalyst for a truly multidisciplinary science of learning. PMID:28042361

  3. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of inverse tone mapping the base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared, both backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
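
    A minimal sketch of the two-layer structure follows, assuming a simple log-domain operator as a stand-in for the paper's tone mapping and perceptually uniform space: the base layer is an 8-bit tone-mapped signal, and the enhancement layer is the residual between the original and the inverse-tone-mapped base.

```python
# Two-layer HDR encoding sketch with an assumed global log operator.
import numpy as np

def tonemap(hdr):                      # hypothetical global operator
    return np.clip(np.log1p(hdr) / np.log1p(hdr.max()) * 255, 0, 255).astype(np.uint8)

def inverse_tonemap(base, peak):       # approximate inverse of tonemap()
    return np.expm1(base.astype(np.float64) / 255 * np.log1p(peak))

hdr = np.abs(np.random.randn(480, 640)) * 50.0      # stand-in HDR luminance
base = tonemap(hdr)                                 # backwards-compatible layer
predicted = inverse_tonemap(base, hdr.max())        # decoder-side prediction
residual = np.log1p(hdr) - np.log1p(predicted)      # enhancement layer to encode
```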

  4. [MODERN INSTRUMENTS FOR EAR, NOSE AND THROAT RENDERING AND EVALUATION IN RESEARCHES ON RUSSIAN SEGMENT OF THE INTERNATIONAL SPACE STATION].

    PubMed

    Popova, I I; Orlov, O I; Matsnev, E I; Revyakin, Yu G

    2016-01-01

    The paper reports the results of testing diagnostic video systems enabling digital imaging of the ENT organs, teeth and jaws. The authors substantiate the criteria for choosing imaging systems and for their future integration into the LOR kit on the Russian segment of the International Space Station, developed for the examination and downlink of high-quality images of cosmonauts' ENT organs, periodontium and teeth.

  5. Citrus Inventory

    NASA Technical Reports Server (NTRS)

    1994-01-01

    An aerial color infrared (CIR) mapping system developed by Kennedy Space Center enables Florida's Charlotte County to accurately appraise its citrus groves while reducing appraisal costs. The technology was further advanced by development of a dual video system making it possible to simultaneously view images of the same area and detect changes. An image analysis system automatically surveys and photo interprets grove images as well as automatically counts trees and reports totals. The system, which saves both time and money, has potential beyond citrus grove valuation.

  6. Integrating Digital Video Technology in the Classroom

    ERIC Educational Resources Information Center

    Lim, Jon; Pellett, Heidi Henschel; Pellett, Tracy

    2009-01-01

    Digital video technology can be a powerful tool for teaching and learning. It enables students to develop a variety of skills including research, communication, decision-making, problem-solving, and other higher-order critical-thinking skills. In addition, digital video technology has the potential to enrich university classroom curricula, enhance…

  7. Direct endoscopic video registration for sinus surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel; Taylor, Russell H.; Ishii, Masaru; Hager, Gregory D.

    2009-02-01

    Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) [1] and the z-buffer algorithm [2]. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm uses only the polygons of the isosurface visible from the current camera location during each iteration, to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance, we compare it to registration via Optotrak and present the closest point-to-surface distance error. We show our algorithm has a mean closest distance error of 0.2268 mm.
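
    For orientation, one TrICP-style iteration can be sketched as follows. This is a generic illustration that omits the paper's z-buffer visibility test and scale estimation, and the trim fraction is an assumption: the worst matches are discarded before the rigid update is solved via SVD (the Kabsch method).

```python
# One trimmed-ICP iteration: match, trim worst correspondences, solve rigidly.
import numpy as np
from scipy.spatial import cKDTree

def tricp_step(src, surface_pts, keep=0.7):
    tree = cKDTree(surface_pts)
    dist, idx = tree.query(src)
    order = np.argsort(dist)[: int(keep * len(src))]   # trim worst matches
    p, q = src[order], surface_pts[idx[order]]
    pc, qc = p.mean(0), q.mean(0)
    H = (p - pc).T @ (q - qc)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = qc - R @ pc
    return src @ R.T + t                               # updated source cloud
```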

  8. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., with low latency). Advances in display and sensor technologies are providing never before seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  9. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2014-12-01

    Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models aim to reveal the inner structure, dynamics, and relationships of things, but they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. The system can also be used for satellite mission planning and public outreach.

  10. Subtitle Synchronization across Multiple Screens and Devices

    PubMed Central

    Rodriguez-Alsina, Aitor; Talavera, Guillermo; Orero, Pilar; Carrabina, Jordi

    2012-01-01

    Ambient Intelligence is a new paradigm in which environments are sensitive and responsive to the presence of people. This is having an increasing importance in multimedia applications, which frequently rely on sensors to provide useful information to the user. In this context, multimedia applications must adapt and personalize both content and interfaces in order to reach acceptable levels of context-specific quality of service for the user, and enable the content to be available anywhere and at any time. The next step is to make content available to everybody in order to overcome the existing access barriers to content for users with specific needs, or else to adapt to different platforms, hence making content fully usable and accessible. Appropriate access to video content, for instance, is not always possible due to the technical limitations of traditional video packaging, transmission and presentation. This restricts the flexibility of subtitles and audio-descriptions to be adapted to different devices, contexts and users. New Web standards built around HTML5 enable more featured applications with better adaptation and personalization facilities, and thus would seem more suitable for accessible AmI environments. This work presents a video subtitling system that enables the customization, adaptation and synchronization of subtitles across different devices and multiple screens. The benefits of HTML5 applications for building the solution are analyzed along with their current platform support. Moreover, examples of the use of the application in three different cases are presented. Finally, the user experience of the solution is evaluated. PMID:23012513

  11. Subtitle synchronization across multiple screens and devices.

    PubMed

    Rodriguez-Alsina, Aitor; Talavera, Guillermo; Orero, Pilar; Carrabina, Jordi

    2012-01-01

    Ambient Intelligence is a new paradigm in which environments are sensitive and responsive to the presence of people. This is having an increasing importance in multimedia applications, which frequently rely on sensors to provide useful information to the user. In this context, multimedia applications must adapt and personalize both content and interfaces in order to reach acceptable levels of context-specific quality of service for the user, and enable the content to be available anywhere and at any time. The next step is to make content available to everybody in order to overcome the existing access barriers to content for users with specific needs, or else to adapt to different platforms, hence making content fully usable and accessible. Appropriate access to video content, for instance, is not always possible due to the technical limitations of traditional video packaging, transmission and presentation. This restricts the flexibility of subtitles and audio-descriptions to be adapted to different devices, contexts and users. New Web standards built around HTML5 enable more featured applications with better adaptation and personalization facilities, and thus would seem more suitable for accessible AmI environments. This work presents a video subtitling system that enables the customization, adaptation and synchronization of subtitles across different devices and multiple screens. The benefits of HTML5 applications for building the solution are analyzed along with their current platform support. Moreover, examples of the use of the application in three different cases are presented. Finally, the user experience of the solution is evaluated.

  12. Complete thoracoscopic lobectomy for cancer: comparative study of three-dimensional high-definition with two-dimensional high-definition video systems †.

    PubMed

    Bagan, Patrick; De Dominicis, Florence; Hernigou, Jacques; Dakhil, Bassel; Zaimi, Rym; Pricopi, Ciprian; Le Pimpec Barthes, Françoise; Berna, Pascal

    2015-06-01

    Common video systems for video-assisted thoracic surgery (VATS) provide the surgeon with a two-dimensional (2D) image. This study aimed to evaluate the performance of a new three-dimensional high-definition (3D-HD) system in comparison with a two-dimensional high-definition (2D-HD) system when conducting a complete thoracoscopic lobectomy (CTL). This multi-institutional comparative study trialled two video systems, 2D-HD and 3D-HD, used to conduct the same type of CTL. The inclusion criteria were T1N0M0 non-small-cell lung carcinoma (NSCLC) in the left lower lobe, suitable for thoracoscopic resection. The CTL was performed by the same surgeon using either a 3D-HD or a 2D-HD system. Eighteen patients with NSCLC were included in the study between January and December 2013: 14 males and 4 females, with a median age of 65.6 years (range: 49-81). The patients were randomized before inclusion into two groups, to undergo surgery with either a 2D-HD or a 3D-HD system. We compared operating time, drainage duration, hospital stay, and the N upstaging rate from the definitive histology. The use of the 3D-HD system significantly reduced the surgical time (by 17%), whereas chest-tube drainage, hospital stay, the number of lymph-node stations, and upstaging were similar in both groups. The main finding was that the 3D-HD system significantly reduced the surgical time needed to complete the lobectomy. Thus, future integration of 3D-HD systems should improve thoracoscopic surgery and enable more complex resections to be performed. It will also help advance the field of endoscopically assisted surgery. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  13. Model-Based Analysis of Flow-Mediated Dilation and Intima-Media Thickness

    PubMed Central

    Bartoli, G.; Menegaz, G.; Lisi, M.; Di Stolfo, G.; Dragoni, S.; Gori, T.

    2008-01-01

    We present an end-to-end system for the automatic measurement of flow-mediated dilation (FMD) and intima-media thickness (IMT) for the assessment of arterial function. The video sequences are acquired from a B-mode echographic scanner. A spline model (deformable template) is fitted to the data to detect the artery boundaries and track them along the video sequence, exploiting a priori knowledge about the image features and content. Preprocessing is performed to improve both the visual quality of video frames for visual inspection and the performance of the segmentation algorithm, without affecting the accuracy of the measurements. The system allows real-time processing as well as a high level of interactivity with the user, through a graphical user interface (GUI) that enables the cardiologist to supervise the whole process and, if necessary, reset the contour extraction at any point in time. The system was validated, and the accuracy, reproducibility, and repeatability of the measurements were assessed in extensive in vivo experiments. Together with its user friendliness, low cost, and robustness, this makes the system suitable for both research and daily clinical use. PMID:19360110
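
    The FMD number itself is a simple ratio once the contour model has produced a per-frame diameter trace. A minimal sketch of that final step, with illustrative baseline-window and smoothing parameters (not the paper's values):

    ```python
    import numpy as np

    def flow_mediated_dilation(diam_mm, fps, baseline_s=30.0):
        """FMD (%) from a per-frame arterial diameter trace (mm).

        Assumes the first `baseline_s` seconds of the recording precede
        cuff release; the peak is searched in the remainder.
        """
        d = np.asarray(diam_mm, dtype=float)
        n_base = int(baseline_s * fps)
        baseline = d[:n_base].mean()
        k = max(1, int(fps // 2))                       # ~0.5 s moving average
        smooth = np.convolve(d, np.ones(k) / k, mode="same")
        peak = smooth[n_base:].max()
        return 100.0 * (peak - baseline) / baseline

    # Synthetic 2-minute trace at 25 fps with a dilation peak near t = 70 s.
    fps = 25
    t = np.arange(0, 120, 1 / fps)
    trace = 4.0 + 0.25 * np.exp(-(((t - 70) / 15) ** 2))
    print(f"FMD = {flow_mediated_dilation(trace, fps):.1f}%")  # ~6%
    ```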

  14. Optical tweezers with 2.5 kHz bandwidth video detection for single-colloid electrophoresis

    NASA Astrophysics Data System (ADS)

    Otto, Oliver; Gutsche, Christof; Kremer, Friedrich; Keyser, Ulrich F.

    2008-02-01

    We developed an optical tweezers setup to study the electrophoretic motion of colloids in an external electric field. The setup is based on standard components for illumination and video detection. Our video-based optical tracking of the colloid motion has a time resolution of 0.2 ms, resulting in a bandwidth of 2.5 kHz. This enables calibration of the optical tweezers by Brownian motion without the need for a quadrant photodetector. We demonstrate that our system has a spatial resolution of 0.5 nm and a force sensitivity of 20 fN, using a Fourier algorithm to detect periodic oscillations of the trapped colloid caused by an external AC field. The electrophoretic mobility and zeta potential of a single colloid can be extracted in aqueous solution, avoiding the screening effects common in bulk measurements.
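
    The quoted bandwidth follows directly from the sampling interval: a tracker that delivers one position sample every \(\Delta t = 0.2\) ms has a Nyquist-limited detection bandwidth of

    \[ f_{\max} = \frac{1}{2\,\Delta t} = \frac{1}{2 \times 0.2\ \mathrm{ms}} = 2.5\ \mathrm{kHz}. \]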

  15. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single-pixel imaging can be preferable to traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single-pixel imaging cannot compete with the frame rates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The video frame rates obtained were double those of state-of-the-art systems, ranging from 22 Hz at 32×32 resolution to 0.75 Hz at 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of 256×256 images in less than 3 s.
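
    A minimal sketch of the dual-detector idea, assuming the modulator routes "on" pixels to one detector and "off" pixels to the other so that each Hadamard pattern and its complement are measured simultaneously (a common arrangement; the paper's exact optical layout may differ):

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    # Differential single-pixel reconstruction: detector A sees light from
    # the +1 pixels of each pattern, detector B from the -1 pixels.
    N = 32                        # image is N x N
    H = hadamard(N * N)           # +1/-1 Hadamard patterns, one per row
    img = np.random.rand(N * N)   # stand-in for the (unknown) scene

    a = H.clip(min=0) @ img       # detector A signal per pattern
    b = (-H).clip(min=0) @ img    # detector B signal per pattern
    s = a - b                     # differential measurement, s = H @ img

    recon = (H.T @ s) / (N * N)   # Hadamard matrices satisfy H^T H = n I
    print(np.allclose(recon, img))  # True in the noise-free case
    ```

    The differential signal also cancels common-mode fluctuations of the source, which is one reason the two-detector scheme tolerates low SNR better than a single detector.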

  16. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high-resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high-speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a single 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB of memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open-source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high-resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information: digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  17. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high-resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high-speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a single 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB of memory, and an AMD FirePro W600 video card with six Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables connect the 12 individual video monitors. The open-source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high-resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information: digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  18. 47 CFR 15.250 - Operation of wideband systems within the band 5925-7250 MHz.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 MHz. The video bandwidth of the measurement instrument shall not be less than RBW. If RBW is...) Emissions from digital circuitry used to enable the operation of the transmitter may comply with the limits... from digital circuitry contained within the transmitter and the emissions are not intended to be...

  19. Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.

    PubMed

    Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E

    2018-01-01

    Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.

  20. Enabling MEMS technologies for communications systems

    NASA Astrophysics Data System (ADS)

    Lubecke, Victor M.; Barber, Bradley P.; Arney, Susanne

    2001-11-01

    Modern communications demands have been growing steadily not only in size but also in sophistication. Phone calls over copper wires have evolved into high-definition video conferencing over optical fibers and wireless internet browsing. The technology used to meet these demands is under constant pressure to provide increased capacity, speed, and efficiency, all with reduced size and cost. Various MEMS technologies have shown great promise for meeting these challenges by extending the performance of conventional circuitry and introducing radically new systems approaches. A variety of strategic MEMS structures, including cost-effective free-space optics and high-Q RF components, are described, along with related practical implementation issues. These components are rapidly becoming essential for enabling the development of progressive new communications systems technologies, including all-optical networks and low-cost multi-system wireless terminals and base stations.

  1. Long Term Activity Analysis in Surveillance Video Archives

    ERIC Educational Resources Information Center

    Chen, Ming-yu

    2010-01-01

    Surveillance video recording is becoming ubiquitous in daily life for public areas such as supermarkets, banks, and airports. The rate at which surveillance video is being generated has accelerated demand for machine understanding to enable better content-based search capabilities. Analyzing human activity is one of the key tasks to understand and…

  2. Facilitating Digital Video Production in the Language Arts Curriculum

    ERIC Educational Resources Information Center

    McKenney, Susan; Voogt, Joke

    2011-01-01

    Two studies were conducted to facilitate the development of feasible support for the process of integrating digital video making activities in the primary school language arts curriculum. The first study explored which teaching supports would be necessary to enable primary school children to create digital video as a means of fostering…

  3. Video game play, child diet, and physical activity behavior change: A randomized clinical trial

    USDA-ARS?s Scientific Manuscript database

    Video games designed to promote behavior change are a promising venue to enable children to learn healthier behaviors. The purpose is to evaluate the outcome from playing "Escape from Diab" (Diab) and "Nanoswarm: Invasion from Inner Space" (Nano) video games on children's diet, physical activity, an...

  4. Efficient management and promotion of utilization of the video information acquired by observation

    NASA Astrophysics Data System (ADS)

    Kitayama, T.; Tanaka, K.; Shimabukuro, R.; Hase, H.; Ogido, M.; Nakamura, M.; Saito, H.; Hanafusa, Y.; Sonoda, A.

    2012-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has recorded deep-sea video during research dives by its submersibles since 1982, and this huge archive, now covering more than 4,000 dives (ca. 24,700 tapes), has been open to the public via the Internet since 2002. Deep-sea video is important because it captures the temporal variation of deep-sea environments that are difficult to investigate and sample, as well as the growth of organisms in extreme environments. With the development of video technology, advanced analysis of survey imagery has also become possible, so the value of these images for understanding the deep-sea environment is high. JAMSTEC's Data Research Center for Marine-Earth Sciences (DrC) collects the videos obtained during JAMSTEC dive surveys and handles their preservation, quality control, and public release. Our central challenge is to manage this huge volume of video information efficiently and to promote its use; this presentation introduces our current measures toward these goals. Videos recorded on tape or other media onboard are collected, then backed up and encoded to prevent loss and degradation. Because the raw video files are large, we use the Linear Tape File System (LTFS), which has recently attracted attention in image-management engineering: it costs less than conventional disk backup, can preserve video data for many years, and offers file handling comparable to a disk. Transcoded copies for distribution are archived on disk storage so that delivery appropriate to each use is possible. To promote use of the videos, the public access system was completely redesigned in November 2011 as the "JAMSTEC E-library of Deep Sea Images (http://www.godac.jamstec.go.jp/jedi/)". The new system offers various search modes (by map, tree, icon, keyword, etc.), and video annotation is possible through the same interface, improving usability for both users and administrators. In addition, the "Biological Information System for Marine Life: BISMaL (http://www.godac.jamstec.go.jp/bismal/e/index.html)", a data system for biodiversity information, particularly biogeographic data on marine organisms, uses the deep-sea videos and their recording positions to visualize species distributions and compile species lists of deep-sea organisms, thereby contributing to the understanding of biodiversity. In the future, we aim to improve the accuracy of the information attached to the videos by supporting annotation through automatic image recognition and by developing an onboard annotation-registration tool, so that higher-quality information can be offered.

  5. Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy

    PubMed Central

    Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca

    2014-01-01

    Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
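
    The inverse-compositional formulation mentioned above can be illustrated for the simplest warp, pure translation. The sketch below is a generic inverse-compositional Lucas-Kanade loop under that assumption; it is not the paper's implementation, which registers video to endoluminal renderings with a richer warp and pose model:

    ```python
    import numpy as np
    from scipy import ndimage

    def ic_lk_translation(template, image, p0=(0.0, 0.0), iters=50, tol=1e-4):
        """Inverse-compositional Lucas-Kanade for a pure-translation warp.

        Gradients, steepest-descent images and the Hessian are precomputed
        once on the template (the efficiency trick of the inverse-
        compositional formulation); only the image is re-warped per
        iteration. p = (row_shift, col_shift).
        """
        T = template.astype(float)
        gy, gx = np.gradient(T)
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)  # steepest-descent images
        H_inv = np.linalg.inv(J.T @ J)                  # constant 2x2 Hessian
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            warped = ndimage.shift(image.astype(float), -p, order=1)
            err = (warped - T).ravel()
            dp = H_inv @ (J.T @ err)
            p -= dp                  # compose with the *inverse* increment
            if np.linalg.norm(dp) < tol:
                break
        return p
    ```

    Because the Hessian never changes across iterations (or frames), per-frame work reduces to one warp and two matrix-vector products, which is what makes frame rates far above video rate plausible.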

  6. The Modular Optical Underwater Survey System

    PubMed Central

    Amin, Ruhul; Richards, Benjamin L.; Misa, William F. X. E.; Taylor, Jeremy C.; Miller, Dianna R.; Rollo, Audrey K.; Demarke, Christopher; Ossolinski, Justin E.; Reardon, Russell T.; Koyanagi, Kyle H.

    2017-01-01

    The Pacific Islands Fisheries Science Center deploys the Modular Optical Underwater Survey System (MOUSS) to estimate the species-specific, size-structured abundance of commercially-important fish species in Hawaii and the Pacific Islands. The MOUSS is an autonomous stereo-video camera system designed for the in situ visual sampling of fish assemblages. This system is rated to 500 m and its low-light, stereo-video cameras enable identification, counting, and sizing of individuals at a range of 0.5–10 m. The modular nature of MOUSS allows for the efficient and cost-effective use of various imaging sensors, power systems, and deployment platforms. The MOUSS is in use for surveys in Hawaii, the Gulf of Mexico, and Southern California. In Hawaiian waters, the system can effectively identify individuals to a depth of 250 m using only ambient light. In this paper, we describe the MOUSS’s application in fisheries research, including the design, calibration, analysis techniques, and deployment mechanism. PMID:29019962
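
    The stereo sizing referred to above follows standard triangulation; the relations below are textbook formulas rather than the paper's calibration model, assuming a rectified pair with focal length \(f\) (pixels), baseline \(B\), principal point \((c_x, c_y)\), and disparity \(d\) for a matched image point \((u, v)\):

    \[ Z = \frac{fB}{d}, \qquad X = \frac{(u - c_x)\,Z}{f}, \qquad Y = \frac{(v - c_y)\,Z}{f}. \]

    A fish's length is then the Euclidean distance \(L = \lVert \mathbf{P}_{\text{snout}} - \mathbf{P}_{\text{tail}} \rVert\) between the two triangulated 3D endpoints, which is what makes sizing over the 0.5-10 m working range possible from a calibrated camera pair.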

  7. Evolution of the 3-dimensional video system for facial motion analysis: ten years' experiences and recent developments.

    PubMed

    Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred

    2012-08-01

    Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the areas of generative face models, active appearance models (and extensions), optical flow, and video tracking, have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable production of this system, enhancing clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept and leading to better results and a higher quality of life for patients with impaired facial function.

  8. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study

    PubMed Central

    Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-01-01

    Background Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall are crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events, and understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. Objective The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. Methods A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7) by 43 wall-mounted cameras deployed in all common areas and in 10 of the 40 private bedrooms (with the consent of residents and families). Video review was provided to facility staff through a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Results Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identification of the cognitive-behavioral deficiencies and environmental circumstances contributing to each fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Conclusions Video monitoring offers high potential to support conventional care in memory care facilities. PMID:29042342

  9. Feature Quantization and Pooling for Videos

    DTIC Science & Technology

    2014-05-01

    does not score high on this metric. The exceptions are videos where objects move - for example, the ice skaters ("ice") and the tennis player, tracked… BMW enables interpretation of similar regions across videos (tennis)… Common Motion Words across videos with large camera…

  10. Enabling technology for human collaboration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, Tim Andrew; Jones, Wendell Bruce; Warner, David Jay

    2003-11-01

    This report summarizes the results of a five-month LDRD late-start project that explored the potential of enabling technology to improve the performance of small groups. The purpose was to investigate and develop new methods to assist groups working in high-consequence, high-stress, ambiguous and time-critical situations, especially those for which it is impractical to adequately train or prepare. A testbed was constructed for exploratory analysis of a small group engaged in tasks with high cognitive and communication performance requirements. The system consisted of five computer stations, four equipped with special devices to collect physiologic, somatic, audio and video data. Test subjects were recruited and engaged in a cooperative video game, with each team member wearing a sensor array for physiologic and somatic data collection while playing. We explored the potential for real-time signal analysis to provide information that enables emergent and desirable group behavior and improved task performance. The data collected in this study included audio, video, game scores, physiological, somatic, keystroke, and mouse-movement data. The use of self-organizing maps (SOMs) was explored to search for emergent trends in the physiological data as it correlated with the video, audio and game scores. This exploration resulted in the development of two approaches for analysis, to be used concurrently: an individual SOM and a group SOM. The individual SOM was trained using the unique data of each person and was used to monitor the effectiveness and stress level of each member of the group. The group SOM was trained using the data of the entire group and was used to monitor group effectiveness and dynamics. Results suggested that both types of SOMs were required to adequately track evolutions and shifts in group effectiveness. Four subjects were used in the data collection and development of these tools. This report documents a proof-of-concept study, and its observations are preliminary; its main purpose is to demonstrate the potential for the tools developed here to improve the effectiveness of groups, and to suggest possible hypotheses for future exploration.
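
    A minimal self-organizing map sketch in the spirit of the individual/group SOMs described; the grid size, decay schedule, and synthetic feature set below are illustrative, not the report's:

    ```python
    import numpy as np

    def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a small self-organizing map on row-vector samples.

        `data` has shape (n_samples, n_features); the returned weight
        array has shape (grid_h, grid_w, n_features).
        """
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        yy, xx = np.mgrid[0:h, 0:w]          # node coordinates for neighborhoods
        for t in range(iters):
            x = data[rng.integers(len(data))]
            # Best-matching unit: the node whose weights are closest to x.
            dist = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(dist.argmin(), dist.shape)
            # Linearly decaying learning rate and neighborhood radius.
            frac = t / iters
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
        return weights

    # Example: map 4 synthetic "physiologic" channels onto an 8x8 grid.
    som = train_som(np.random.rand(500, 4))
    ```

    Trained this way, each new sample's best-matching unit gives a trajectory over the map that can be watched for the shifts in individual or group state the report describes.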

  11. Data recording and playback on video tape--a multi-channel analog interface for a digital audio processor system.

    PubMed

    Blaettler, M; Bruegger, A; Forster, I C; Lehareinger, Y

    1988-03-01

    The design of an analog interface to a digital audio signal processor (DASP)-video cassette recorder (VCR) system is described. The complete system represents a low-cost alternative to both FM instrumentation tape recorders and multi-channel chart recorders. The interface, or DASP input-output unit, described in this paper enables the recording and playback of up to 12 analog channels with up to 12-bit resolution and a bandwidth of 2 kHz per channel. Internal control and timing in the recording component of the interface is performed using ROMs, which can be reprogrammed to suit different analog-to-digital converter hardware. The bandwidth specification can be improved by connecting channels in parallel. A parallel 16-bit data output port is provided for direct transfer of the digitized data to a computer.
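
    A back-of-envelope check (not from the paper) shows why 12 such channels fit within a digital audio processor's budget, assuming each 2 kHz channel is sampled at the Nyquist minimum of 4 kS/s:

    \[ 12\ \text{channels} \times 4\ \mathrm{kS/s} \times 12\ \mathrm{bits} = 576\ \mathrm{kbit/s}, \]

    well under the \(2 \times 44.1\ \mathrm{kS/s} \times 16\ \mathrm{bits} \approx 1.41\ \mathrm{Mbit/s}\) that a stereo 16-bit digital audio processor records.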

  12. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
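
    One plausible reading of the foot-to-head normalization is a per-camera metric scale recovered from a detected person and an assumed average height; the sketch below illustrates that idea with hypothetical names and values, and is not the paper's calibration algorithm:

    ```python
    import numpy as np

    ASSUMED_HEIGHT_M = 1.7  # average human height used as the metric reference

    def camera_scale(foot_px, head_px, height_m=ASSUMED_HEIGHT_M):
        """Metres-per-pixel at the person's location, from one detection.

        foot_px, head_px: (x, y) image points of the same person.
        """
        pixel_height = np.linalg.norm(np.subtract(head_px, foot_px))
        return height_m / pixel_height

    def normalized_size(bbox_h_px, foot_px, head_px):
        """Convert a bounding-box height in pixels to metres so the same
        person yields comparable metadata across heterogeneous cameras."""
        return bbox_h_px * camera_scale(foot_px, head_px)

    # The same person seen by two cameras with different geometry.
    print(normalized_size(180, foot_px=(320, 400), head_px=(320, 220)))  # 1.7
    print(normalized_size(90,  foot_px=(100, 250), head_px=(100, 160)))  # 1.7
    ```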

  13. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  14. Using WorldWide Telescope in Observing, Research and Presentation

    NASA Astrophysics Data System (ADS)

    Roberts, Douglas A.; Fay, J.

    2014-01-01

    WorldWide Telescope (WWT) is free software that enables researchers to interactively explore observational data using a user-friendly interface. Reference all-sky datasets and pointed observations are available as layers, along with the ability to easily overlay additional FITS images and catalog data. Connections to the Astrophysics Data System (ADS) are included, enabling visual investigation in WWT to drive document searches in ADS. WWT can be used to capture and share visual exploration with colleagues during observational planning and analysis. Finally, researchers can use WorldWide Telescope to create videos for professional, educational and outreach presentations. I will conclude with an example of how I have used WWT in a research project: specifically, how WorldWide Telescope helped our group prepare for radio observations and, afterwards, analyze multi-wavelength data taken in the inner parsec of the Galaxy. A concluding video will show how WWT brought together disparate datasets in a unified interactive visualization environment.

  15. Co-Located Collaborative Learning Video Game with Single Display Groupware

    ERIC Educational Resources Information Center

    Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka

    2010-01-01

    Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…

  16. Why Video Games Can Be a Good Fit for Formative Assessment

    ERIC Educational Resources Information Center

    Bauer, Malcolm; Wylie, Caroline; Jackson, Tanner; Mislevy, Bob; Hoffman-John, Erin; John, Michael; Corrigan, Seth

    2017-01-01

    This paper explores the relation between formative assessment principles and their analogues in video games that game designers have been developing over the past 35 years. We identify important parallels between the two that should enable effective and efficient use of well-designed video games in the classroom as part of an overall learning…

  17. Video Recorded Feedback for Self Regulation of Prospective Music Teachers in Piano Lessons

    ERIC Educational Resources Information Center

    Deniz, Jale

    2012-01-01

    The main purpose of the study is to enable prospective teachers to self-regulate by video recording their piano performances with their instructors, together with the instructors' feedback, and to ascertain the views of the students concerning these video records. The research was carried out during the 2008-2009 academic year in Marmara…

  18. Choosing Documentaries that Make a Difference

    ERIC Educational Resources Information Center

    Wilson, Robert D.

    2004-01-01

    Students pay attention to videos that will help them in life and enable them to get better grades. When watching documentaries, they want relevance versus canned presentations with actors. So do you, for that matter. Because many students go home to a steady diet of television, rock videos, video games and box office movies, all of them well-made…

  19. Duckneglect: video-games based neglect rehabilitation.

    PubMed

    Mainetti, R; Sedda, A; Ronchetti, M; Bottini, G; Borghese, N A

    2013-01-01

    Video games are becoming a common tool to guide patients through rehabilitation because of their power to motivate and engage their users. Video games may also be integrated into an infrastructure that allows patients discharged from the hospital to continue intensive rehabilitation at home under remote monitoring by the hospital itself, as suggested by the recently funded Rewire project. The goal of this work is to describe a novel low-cost platform, based on video games, targeted at neglect rehabilitation. The patient is guided to explore his neglected hemispace by a set of specifically designed games that ask him to reach targets, with an increasing level of difficulty. Visual and auditory cues help the patient in the task and are progressively removed. A controlled randomization of scenarios, targets and distractors, a balanced reward system and music played in the background all contribute to making rehabilitation more attractive, thus enabling intensive prolonged treatment. Results from our first patient, who underwent rehabilitation for half an hour, five days a week, for one month, showed on the one hand a very positive attitude of the patient towards the platform for the whole period, and on the other a significant improvement. Importantly, this amelioration was confirmed at a follow-up evaluation five months after the last rehabilitation session and generalized to everyday life activities. Such a system could well be integrated into a home-based rehabilitation system.

  20. Automatic Camera Control System for a Distant Lecture with Videoing a Normal Classroom.

    ERIC Educational Resources Information Center

    Suganuma, Akira; Nishigori, Shuichiro

    The growth of communication network technology enables students to take part in distant lectures. Although many university lectures are conducted using Web contents, normal lectures using a blackboard are still held. The latter style of lecture is well suited to a teacher's dynamic explanation. A way to adapt it for a distant lecture is to…

  1. Live animal myelin histomorphometry of the spinal cord with video-rate multimodal nonlinear microendoscopy

    NASA Astrophysics Data System (ADS)

    Bélanger, Erik; Crépeau, Joël; Laffray, Sophie; Vallée, Réal; De Koninck, Yves; Côté, Daniel

    2012-02-01

    In vivo imaging of cellular dynamics can dramatically advance our understanding of the pathophysiology of nervous system diseases. To fully exploit the power of this approach, the main challenges have been to minimize invasiveness and maximize the number of concurrent optical signals that can be combined to probe the interplay between multiple cellular processes. Label-free coherent anti-Stokes Raman scattering (CARS) microscopy, for example, can be used to follow demyelination in neurodegenerative diseases or after trauma, but myelin imaging alone is not sufficient to understand the complex sequence of events that leads to the appearance of lesions in the white matter. A commercially available microendoscope is used here to achieve minimally invasive, video-rate multimodal nonlinear imaging of cellular processes in live mouse spinal cord. The system allows for simultaneous CARS imaging of myelin sheaths and two-photon excitation fluorescence microendoscopy of microglial cells and axons. Morphometric data extraction at high spatial resolution is also described, with a technique for reducing motion-related imaging artifacts. Despite its small diameter, the microendoscope enables high-speed multimodal imaging over wide areas of tissue, yet at a resolution sufficient to quantify subtle differences in myelin thickness and microglial motility.

  2. Super Resolution Algorithm for CCTVs

    NASA Astrophysics Data System (ADS)

    Gohshi, Seiichi

    2015-03-01

    Recently, security cameras and CCTV systems have become an important part of our daily lives. The rising demand for such systems has created business opportunities in this field, especially in big cities. Analogue CCTV systems are being replaced by digital systems, and HDTV CCTV has become quite common. HDTV CCTV can achieve images with high contrast and decent quality when captured in daylight. However, an image captured at night does not always have sufficient contrast and resolution because of poor lighting conditions. CCTV systems depend on infrared light at night to compensate for insufficient lighting, producing monochrome images and videos that lack contrast and are blurred. We propose a nonlinear signal processing technique that significantly improves the visual quality (contrast and resolution) of low-contrast infrared images. The proposed method enables the use of infrared cameras for night shots and other poor-lighting environments.
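
    The abstract does not specify the nonlinear operator, so the sketch below is a generic stand-in: unsharp-mask detail extraction re-added through a compressive nonlinearity, so that faint infrared edges are boosted more than already-strong ones. Parameter values are illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def nonlinear_enhance(img, sigma=2.0, gain=2.5, alpha=0.7):
        """Illustrative nonlinear enhancement for low-contrast IR frames.

        Not the paper's algorithm: a Gaussian unsharp mask isolates
        high-frequency detail, which is re-added through |d|**alpha
        (alpha < 1), relatively amplifying weak detail over strong.
        """
        f = img.astype(float)
        blur = ndimage.gaussian_filter(f, sigma)
        detail = f - blur
        boosted = np.sign(detail) * (np.abs(detail) ** alpha)
        out = blur + gain * boosted
        return np.clip(out, 0, 255).astype(np.uint8)
    ```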

  3. Algorithm for Video Summarization of Bronchoscopy Procedures

    PubMed Central

    2011-01-01

    Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist, who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. Such frames seem unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions, which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
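
    A common proxy for the "non-informative" (blurred or unfocused) frame test is the variance-of-Laplacian focus measure; a minimal sketch with a hypothetical threshold, not the authors' exact criterion:

    ```python
    import cv2

    def is_informative(frame_bgr, blur_thresh=60.0):
        """Flag frames sharp enough to keep in a bronchoscopy summary.

        Uses the classic variance-of-Laplacian focus measure as a proxy
        for detecting blurred/unfocused "non-informative" frames; the
        threshold would need tuning on real recordings.
        """
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        focus = cv2.Laplacian(gray, cv2.CV_64F).var()
        return focus >= blur_thresh
    ```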

  4. Next Generation Integrated Environment for Collaborative Work Across Internets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey B. Newman

    2009-02-24

    We are now well advanced in our development, prototyping and deployment of a high-performance next-generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations. It has a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multi-platform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions; policy-based and network-state-based resource scheduling; authentication; and optional encryption to maintain the confidentiality of inter-personal communications. High-performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application sets.

  5. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.

  6. Piloting Telepresence-Enabled Education and Outreach Programs from a UNOLS Ship - Live Interactive Broadcasts from the R/V Endeavor

    NASA Astrophysics Data System (ADS)

    Pereira, M.; Coleman, D.; Donovan, S.; Sanders, R.; Gingras, A.; DeCiccio, A.; Bilbo, E.

    2016-02-01

    The University of Rhode Island's R/V Endeavor was recently equipped with a new satellite telecommunication system and a telepresence system to enable live ship-to-shore broadcasts and remote user participation through the Inner Space Center. The Rhode Island Endeavor Program, which provides state-funded ship time to support local oceanographic research and education, funded a 5-day cruise off the Rhode Island coast that involved a multidisciplinary team of scientists, engineers, students, educators and video producers. Using two remotely operated vehicle (ROV) systems, several dives were conducted to explore various shipwrecks including the German WWII submarine U-853. During the cruise, a team of URI ocean engineers supported ROV operations and performed engineering tests of a new manipulator. Colleagues from the United States Coast Guard Academy operated a small ROV to collect imagery and environmental data around the wreck sites. Additionally, a team of engineers and oceanographers from URI tested a new acoustic sound source and small acoustic receivers developed for a fish tracking experiment. The video producers worked closely with the participating scientists, students and two high school science teachers to communicate the oceanographic research during live educational broadcasts streamed into Rhode Island classrooms, to the public Internet, and directly to Rhode Island Public Television. This work contributed to increasing awareness of possible career pathways for the Rhode Island K-12 population, taught about active oceanographic research projects, and engaged the public in scientific adventures at sea. The interactive nature of the broadcasts included live responses to questions submitted online and live updates and feedback using social media tools. This project characterizes the power of telepresence and video broadcasting to engage diverse learners and exemplifies innovative ways to utilize social media and the Internet to draw a varied audience.

  7. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  8. Group tele-immersion:enabling natural interactions between groups at distant sites.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Christine L.; Stewart, Corbin; Nashel, Andrew

    2005-08-01

    We present techniques and a system for synthesizing views for video teleconferencing between small groups. In place of replicating one-to-one systems for each pair of users, we create a single unified display of the remote group. Instead of performing dense 3D scene computation, we use more cameras and trade storage and hardware for computation. While it is expensive to directly capture a scene from all possible viewpoints, we have observed that the participants' viewpoints usually remain at a constant height (eye level) during video teleconferencing. Therefore, we can restrict the possible viewpoints to a virtual plane without sacrificing much of the realism, and in doing so we significantly reduce the number of required cameras. Based on this observation, we have developed a technique that uses light-field-style rendering to guarantee the quality of the synthesized views, using a linear array of cameras with a life-sized, projected display. Our full-duplex prototype system between Sandia National Laboratories, California and the University of North Carolina at Chapel Hill has been able to synthesize photo-realistic views at interactive rates, and has been used for video conferencing during regular meetings between the sites.

  9. Computer-based System for the Virtual-Endoscopic Guidance of Bronchoscopy.

    PubMed

    Helferty, J P; Sherbondy, A J; Kiraly, A P; Higgins, W E

    2007-11-01

    The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.
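
    Normalized mutual information, the similarity metric named above, is straightforward to compute from a joint intensity histogram. A minimal sketch using the Studholme normalization (assumed here; the guidance system maximizes such a score over bronchoscope pose):

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """NMI = (H(A) + H(B)) / H(A, B), from a joint histogram.

        a, b: equally shaped intensity images, e.g. a CT-derived
        endoluminal rendering and a preprocessed video frame.
        """
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)
        nz = pxy > 0
        h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
        h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return (h_x + h_y) / h_xy
    ```

    Because the measure depends only on the statistical relationship between the two intensity distributions, it tolerates the very different appearance of rendered and real endoluminal views.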

  10. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers of all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated, from providing increased and potentially stealthy surveillance, to removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data from the unmanned vehicle in its original format and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters with additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
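
    A minimal two-view structure-from-motion sketch using OpenCV, assuming known camera intrinsics K; a production pipeline like the one described would chain many frames and refine with bundle adjustment:

    ```python
    import cv2
    import numpy as np

    def two_view_points(img1, img2, K):
        """Sparse 3D points from two video frames via structure-from-motion.

        K: 3x3 camera intrinsic matrix (assumed known or estimated).
        Returns an Nx3 point cloud in the first camera's frame.
        """
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # Relative pose from the essential matrix (RANSAC rejects outliers).
        E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
        # Triangulate matched points from the two projection matrices.
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        return (pts4[:3] / pts4[3]).T
    ```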

  11. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
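
    The augmentation step lends itself to a small illustration. The sketch below fakes inter-frame motion by shifting an annotated still image; a plain global translation is a simplification of whatever motion model the authors actually use, and the function name is illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def simulate_video_pair(image, saliency_mask, max_shift=10, seed=None):
        """Synthesize a two-frame "video" sample from one annotated image.

        Shifts the image and its saliency annotation by the same random
        offset so a temporal saliency model can be trained from static
        image datasets. Nearest-neighbor (order=0) keeps the mask binary.
        """
        rng = np.random.default_rng(seed)
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shift = (dy, dx, 0) if image.ndim == 3 else (dy, dx)
        frame2 = ndimage.shift(image, shift, order=1, mode="nearest")
        mask2 = ndimage.shift(saliency_mask, (dy, dx), order=0, mode="nearest")
        return (image, saliency_mask), (frame2, mask2)
    ```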

  12. DMD-based quantitative phase microscopy and optical diffraction tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie

    2018-02-01

    Digital micromirror devices (DMDs), which offer high speed and a high degree of freedom in steering illumination, have been increasingly applied to optical microscopy systems in recent years. Lately, we introduced DMDs into digital holography to enable new imaging modalities and break existing imaging limitations. In this paper, we will first present our progress in using DMDs to demonstrate laser-illumination Fourier ptychographic microscopy (FPM) with shot-noise-limited detection. After that, we will present a novel common-path quantitative phase microscopy (QPM) system based on a DMD. Building on those early developments, a DMD-based high-speed optical diffraction tomography (ODT) system has recently been demonstrated, and its results will also be presented. This ODT system achieves video-rate 3D refractive-index imaging, which can potentially enable observations of high-speed 3D structural changes in samples.

  13. "Use Condoms for Safe Sex!" Youth-Led Video Making and Sex Education

    ERIC Educational Resources Information Center

    Yang, Kyung-Hwa; MacEntee, Katie

    2015-01-01

    Situated at the intersection between child-led visual methods and sex education, this paper focuses on the potential of youth-led video making to enable young people to develop guiding principles to inform their own sexual behaviour. It draws on findings from a video-making project carried out with a group of South African young people, which…

  14. Uses of Video in Understanding and Improving Mathematical Thinking and Teaching

    ERIC Educational Resources Information Center

    Schoenfeld, Alan H.

    2017-01-01

    This article characterizes my use of video as a tool for research, design and development. I argue that videos, while a potentially overwhelming source of data, provide the kind of large bandwidth that enables one to capture phenomena that one might otherwise miss; and that although the act of taping is in itself an act of selection, there is…

  15. Improved technical performance of a multifunctional prehospital telemedicine system between the research phase and the routine use phase - an observational study.

    PubMed

    Felzen, Marc; Brokmann, Jörg C; Beckers, Stefan K; Czaplik, Michael; Hirsch, Frederik; Tamm, Miriam; Rossaint, Rolf; Bergrath, Sebastian

    2017-04-01

    Introduction Telemedical concepts in emergency medical services (EMS) lead to improved process times and patient outcomes, but their technical performance has thus far been insufficient; nevertheless, the concept was transferred into EMS routine care in Aachen, Germany. This study evaluated the system's technical performance and compared it to a precursor system. Methods The telemedicine system was implemented on seven ambulances, and a teleconsultation centre staffed with experienced EMS physicians was established in April 2014. Telemedical applications included mobile vital data, 12-lead electrocardiogram (ECG) transmission, picture transmission and video streaming from inside the ambulances. The tele-EMS physician filled in a questionnaire on the technical performance of the applications and the background noise, and assessed the clinical value of the transmitted pictures and videos after each mission between 15 May and 15 October 2014. Results Teleconsultation was established during 539 emergency cases. In 83% of the cases (n = 447), only the paramedics and the tele-EMS physician were involved. Transmission success rates ranged from 98% (audio connection) to 93% (12-lead ECG transmission). All functionalities except video transmission performed significantly better than in the pilot project (p < 0.05). Severe background noise was detected to a lesser extent (p = 0.0004), and the transmitted pictures and videos were rated significantly more valuable clinically. Discussion The multifunctional system is now sufficient for routine use and is the most reliable mobile emergency telemedicine system among published projects. Dropouts were due to user errors and network coverage problems. These findings enable widespread use of this system in the future, reducing the critical time intervals until medical therapy is started.

  16. ICAROUS: Integrated Configurable Architecture for Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This video describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the auspices of the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and autonomous detect and avoid functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.
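
    Constraint conformance monitoring of the kind described above boils down, at its simplest, to checking each position update against a geofence. The ray-casting test below conveys the flavour only; ICAROUS itself relies on formally verified algorithms, not this sketch.

        def in_fence(poly, p):
            """Ray-casting point-in-polygon test for a keep-in geofence.
            poly is a list of (x, y) vertices; p is the vehicle position."""
            x, y = p
            odd = False
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                # Count edge crossings of a ray cast to the right of p.
                if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    odd = not odd
            return odd

        # A square keep-in zone: the monitor would flag odd == False.
        print(in_fence([(0, 0), (10, 0), (10, 10), (0, 10)], (5, 5)))   # True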

  17. A web-based system for home monitoring of patients with Parkinson's disease using wearable sensors.

    PubMed

    Chen, Bor-Rong; Patel, Shyamal; Buckley, Thomas; Rednic, Ramona; McClure, Douglas J; Shih, Ludy; Tarsy, Daniel; Welsh, Matt; Bonato, Paolo

    2011-03-01

    This letter introduces MercuryLive, a platform to enable home monitoring of patients with Parkinson's disease (PD) using wearable sensors. MercuryLive contains three tiers: a resource-aware data collection engine that relies upon wearable sensors, web services for live streaming and storage of sensor data, and a web-based graphical user interface client with video conferencing capability. In addition, the platform can analyze sensor (i.e., accelerometer) data to reliably estimate clinical scores capturing the severity of tremor, bradykinesia, and dyskinesia. Testing showed an average data latency of less than 400 ms and a video latency of about 200 ms at a frame rate of about 13 frames/s when 800 kb/s of bandwidth were available and 40% video compression was used, with data feature upload requiring 1 min of extra time following a 10-min interactive session. These results indicate that the proposed platform is suitable for monitoring patients with PD to facilitate the titration of medications in the late stages of the disease.

  18. Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation

    NASA Astrophysics Data System (ADS)

    Fard, Mani B.; Bayazit, Ulug

    2014-01-01

    In this work, we propose a feasible 3D video generation method that enables high-quality visual perception using a monocular uncalibrated camera. Anthropometric distances between standard facial landmarks are approximated based on the person's age and gender. These measurements are used in a two-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in the initial stage of video shooting, followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position, real-time capture starts and the foreground (viewed person) region is extracted for each frame. After accurate parallax estimation, the extracted foreground is placed in front of the background image captured at the initial position. Thus the constructed full view from the initial position, combined with the view from the secondary (current) position, forms the complete binocular pair during real-time video shooting. Subjective evaluation indicates competent depth-perception quality for the proposed system.

  19. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications that require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  20. NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Hawersaat, Bob W.

    1998-01-01

    The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.

  1. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest-quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced with all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.
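
    The delay and bitrate figures above fit together in a simple way: a fixed startup delay at a given encoding rate corresponds to a client-side buffer whose size is easy to estimate. Illustrative arithmetic only:

        def buffer_kbytes(delay_s, encode_kbps):
            """Approximate buffered data implied by a startup delay (kbit -> kB)."""
            return delay_s * encode_kbps / 8.0

        # At the 150 Kbps rate that gave the highest quality, the observed
        # 15 s delay implies roughly 280 kB buffered before playback starts.
        print(buffer_kbytes(15, 150))   # 281.25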

  2. Implementation of real-time digital endoscopic image processing system

    NASA Astrophysics Data System (ADS)

    Song, Chul Gyu; Lee, Young Mook; Lee, Sang Min; Kim, Won Ky; Lee, Jae Ho; Lee, Myoung Ho

    1997-10-01

    Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past four years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery for laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system composed of a frame grabber, a sound board, a VCR control board, a LAN card, and an endoscopic data management system (EDMS). The computer system also controls peripheral instruments such as a color video printer, a video cassette recorder, and endoscopic input/output signals. The EDMS is based on an open architecture and a set of widely available industry standards, namely Microsoft Windows as the operating system, TCP/IP as the network protocol, and a time-sequential database that handles both images and speech. For data storage, we used MOD and CD-R media. The digital endoscopic system was designed to store, recreate, change, and compress signals and medical images. Computerized endoscopy enables us to generate and manipulate the original visual document, making it accessible to a virtually unlimited number of physicians.

  3. An assessment of the value of video recordings of receptionists.

    PubMed Central

    Sharp, A J; Platts, P; Turner, J H; Drucquer, M H

    1989-01-01

    Video recordings of receptionists at work in general practice were found to be useful for self assessment by the receptionists and enabled the doctors to see areas for improvement in the organization of the reception area. PMID:2560024

  4. Cardiology-oriented PACS

    NASA Astrophysics Data System (ADS)

    Silva, Augusto F. d.; Costa, Carlos; Abrantes, Pedro; Gama, Vasco; Den Boer, Ad

    1998-07-01

    This paper describes an integrated system designed to provide efficient means for DICOM-compliant cardiac image archival, transmission and visualization, based on a communications backbone matching recent enabling telematic technologies such as Asynchronous Transfer Mode (ATM) and switched Local Area Networks (LANs). Within a distributed client-server framework, the system was conceived on a modality-based, bottom-up approach, aiming at ultrafast access to short-term archives and seamless retrieval of cardiac video sequences from review stations located in the outpatient referral rooms, intensive and intermediate care units and operating theaters.

  5. In-network adaptation of SHVC video in software-defined networks

    NASA Astrophysics Data System (ADS)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition (U-HD) video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high-definition (HD) video and poses very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE), we provide valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine that determines the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and evaluated on an SDN, allowing us to provide important benchmarks for video streaming over SDNs and for SDN control plane latency.
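
    To make the in-network adaptation concrete: for a scalable (layered) stream, a MANE can forward the base layer plus as many enhancement layers as the measured bandwidth allows, dropping the rest. The policy below is a minimal sketch with invented numbers, not the VQAM decision engine itself.

        def pick_layer(layer_kbps, available_kbps, safety=0.85):
            """Highest layer index whose cumulative rate fits a safety margin.
            Assumes the base layer (index 0) always fits."""
            total, best = 0.0, 0
            for i, rate in enumerate(layer_kbps):
                total += rate
                if total <= safety * available_kbps:
                    best = i
            return best          # forward layers 0..best, drop the others

        # Base layer 4 Mbit/s plus two enhancement layers, 15 Mbit/s available:
        print(pick_layer([4000, 8000, 20000], available_kbps=15000))   # -> 1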

  6. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    PubMed

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7), thanks to 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff, thanks to a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities. ©Eleonore Bayen, Julien Jacquemot, George Netscher, Pulkit Agrawal, Lynn Tabb Noyce, Alexandre Bayen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.10.2017.

  7. Cross-modal signatures in maternal speech and singing

    PubMed Central

    Trehub, Sandra E.; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined. PMID:24198805

  8. Cross-modal signatures in maternal speech and singing.

    PubMed

    Trehub, Sandra E; Plantinga, Judy; Brcic, Jelena; Nowicki, Magda

    2013-01-01

    We explored the possibility of a unique cross-modal signature in maternal speech and singing that enables adults and infants to link unfamiliar speaking or singing voices with subsequently viewed silent videos of the talkers or singers. In Experiment 1, adults listened to 30-s excerpts of speech followed by successively presented 7-s silent video clips, one from the previously heard speaker (different speech content) and the other from a different speaker. They successfully identified the previously heard speaker. In Experiment 2, adults heard comparable excerpts of singing followed by silent video clips from the previously heard singer (different song) and another singer. They failed to identify the previously heard singer. In Experiment 3, the videos of talkers and singers were blurred to obscure mouth movements. Adults successfully identified the talkers and they also identified the singers from videos of different portions of the song previously heard. In Experiment 4, 6- to 8-month-old infants listened to 30-s excerpts of the same maternal speech or singing followed by exposure to the silent videos on alternating trials. They looked longer at the silent videos of previously heard talkers and singers. The findings confirm the individuality of maternal speech and singing performance as well as adults' and infants' ability to discern the unique cross-modal signatures. The cues that enable cross-modal matching of talker and singer identity remain to be determined.

  9. Video ethnography during and after caesarean sections: methodological challenges.

    PubMed

    Stevens, Jeni; Schmied, Virginia; Burns, Elaine; Dahlen, Hannah G

    2017-07-01

    To describe the challenges of, and steps taken to, successfully collect video ethnographic data during and after caesarean sections. Video ethnographic research uses real-time video footage to study a cultural group or phenomenon in its natural environment. It allows researchers to discover previously undocumented practices, which in turn provides insight into strengths and weaknesses in practice. This knowledge can be used to translate evidence-based interventions into practice. Video ethnographic design. A video ethnographic approach was used to observe the contact between mothers and babies immediately after elective caesarean sections in a tertiary hospital in Sydney, Australia. Women, their support people and staff participated in the study. Data were collected via video footage and field notes in the operating theatre, recovery and the postnatal ward. Challenges faced whilst conducting video ethnographic research included attaining ethics approval, recruiting large numbers of staff members and 'vulnerable' pregnant women, and endeavouring to be a 'fly on the wall' and a 'complete observer'. There were disadvantages to being an 'insider' whilst conducting the research, because staff members occasionally requested help with clinical tasks during data collection; however, it was also an advantage, as it enabled ease of access to the environment and to the staff members to be recruited. Despite the challenges, video ethnographic research provided unique data that could not be attained by any other means. Video ethnographic data are beneficial as they provide exceptionally rich material for in-depth analysis of interactions between the environment, equipment and people in the hospital setting. The analysis of this type of data can then be used to inform improvements in future care. © 2016 John Wiley & Sons Ltd.

  10. Selfies of Imperial Cormorants (Phalacrocorax atriceps): What Is Happening Underwater?

    PubMed Central

    Gómez-Laich, Agustina; Yoda, Ken; Zavalaga, Carlos; Quintana, Flavio

    2015-01-01

    During the last few years, the development of animal-borne still cameras and video recorders has enabled researchers to observe what a wild animal sees in the field. In the present study, we deployed miniaturized video recorders to investigate the underwater foraging behavior of Imperial cormorants (Phalacrocorax atriceps). Video footage was obtained from 12 animals and 49 dives comprising a total of 8.1 h of foraging data. Video information revealed that Imperial cormorants are almost exclusively benthic feeders. While foraging along the seafloor, animals did not necessarily keep their body horizontal but inclined it downwards. The head of the instrumented animal was always visible in the videos and in the majority of the dives it was moved constantly forward and backward by extending and contracting the neck while travelling on the seafloor. Animals detected prey at very short distances, performed quick capture attempts and spent the majority of their time on the seafloor searching for prey. Cormorants foraged at three different sea bottom habitats and the way in which they searched for food differed between habitats. Dives were frequently performed under low luminosity levels suggesting that cormorants would locate prey with other sensory systems in addition to sight. Our video data support the idea that Imperial cormorants’ efficient hunting involves the use of specialized foraging techniques to compensate for their poor underwater vision. PMID:26367384

  11. Selfies of Imperial Cormorants (Phalacrocorax atriceps): What Is Happening Underwater?

    PubMed

    Gómez-Laich, Agustina; Yoda, Ken; Zavalaga, Carlos; Quintana, Flavio

    2015-01-01

    During the last few years, the development of animal-borne still cameras and video recorders has enabled researchers to observe what a wild animal sees in the field. In the present study, we deployed miniaturized video recorders to investigate the underwater foraging behavior of Imperial cormorants (Phalacrocorax atriceps). Video footage was obtained from 12 animals and 49 dives comprising a total of 8.1 h of foraging data. Video information revealed that Imperial cormorants are almost exclusively benthic feeders. While foraging along the seafloor, animals did not necessarily keep their body horizontal but inclined it downwards. The head of the instrumented animal was always visible in the videos and in the majority of the dives it was moved constantly forward and backward by extending and contracting the neck while travelling on the seafloor. Animals detected prey at very short distances, performed quick capture attempts and spent the majority of their time on the seafloor searching for prey. Cormorants foraged at three different sea bottom habitats and the way in which they searched for food differed between habitats. Dives were frequently performed under low luminosity levels suggesting that cormorants would locate prey with other sensory systems in addition to sight. Our video data support the idea that Imperial cormorants' efficient hunting involves the use of specialized foraging techniques to compensate for their poor underwater vision.

  12. Pedestrian detection in video surveillance using fully convolutional YOLO neural network

    NASA Astrophysics Data System (ADS)

    Molchanov, V. V.; Vishnyakov, B. V.; Vizilter, Y. V.; Vishnyakova, O. V.; Knyaz, V. A.

    2017-06-01

    More than 80% of video surveillance systems are used for monitoring people. Older human detection algorithms, based on background and foreground modelling, could not even deal with a group of people, to say nothing of a crowd. Recent robust and highly effective pedestrian detection algorithms are a new milestone for video surveillance systems. Based on modern approaches in deep learning, these algorithms produce very discriminative features that can be used to obtain robust inference in real visual scenes. They deal with such tasks as distinguishing different persons in a group, overcoming the problem of significant occlusion of human bodies by the foreground, and detecting people in various poses. In our work we use a new approach that combines detection and classification into a single task using convolutional neural networks. As a starting point we chose the YOLO CNN, whose authors propose a very efficient way of combining these tasks by learning a single neural network. This approach showed results competitive with state-of-the-art models such as Fast R-CNN, significantly surpassing them in speed, which allows us to apply it in real-time video surveillance and other video monitoring systems. Despite all its advantages, it suffers from some known drawbacks related to the fully connected layers, which prevent applying the CNN to images of different resolutions and limit the ability to distinguish small, close human figures in groups. This is crucial for our tasks, since we work with rather low-quality images that often include dense small groups of people. In this work we gradually change the network architecture to overcome the problems mentioned above, train it on a complex pedestrian dataset and finally obtain a CNN that detects small pedestrians in real scenes.
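
    For readers who want to reproduce a comparable baseline, a YOLO-family Darknet model can be run through OpenCV's DNN module as below. The file names are placeholders and the code is the generic decode-and-NMS recipe, not the modified architecture developed in the paper.

        import cv2
        import numpy as np

        # Placeholder config/weights; any YOLO-family Darknet model applies.
        net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")

        def detect_people(frame, conf_thresh=0.5, nms_thresh=0.4):
            h, w = frame.shape[:2]
            blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
            net.setInput(blob)
            outs = net.forward(net.getUnconnectedOutLayersNames())
            boxes, scores = [], []
            for out in outs:
                for det in out:          # det = [cx, cy, bw, bh, obj, class scores...]
                    cls = int(np.argmax(det[5:]))
                    conf = float(det[4] * det[5 + cls])
                    if cls == 0 and conf > conf_thresh:     # COCO class 0 = person
                        cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                        boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                                      int(bw), int(bh)])
                        scores.append(conf)
            keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
            return [boxes[i] for i in np.array(keep).flatten()]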

  13. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    PubMed

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
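
    Mean average precision, the figure of merit quoted above, rewards rankings that place relevant (here, violent) segments early in the retrieval order. A minimal reference implementation:

        def average_precision(ranked_labels):
            """AP for one query: mean precision@k over the relevant positions.
            ranked_labels lists the retrieval order, 1 = relevant segment."""
            hits, precisions = 0, []
            for k, rel in enumerate(ranked_labels, start=1):
                if rel:
                    hits += 1
                    precisions.append(hits / k)
            return sum(precisions) / max(hits, 1)

        def mean_average_precision(queries):
            return sum(average_precision(q) for q in queries) / len(queries)

        print(average_precision([1, 0, 1, 0, 0]))   # (1/1 + 2/3) / 2 = 0.833...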

  14. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  15. Remote environmental sensor array system

    NASA Astrophysics Data System (ADS)

    Hall, Geoffrey G.

    This thesis examines the creation of an environmental monitoring system for inhospitable environments, named the Remote Environmental Sensor Array System, or RESA System for short. The thesis covers the development of RESA from its inception through the design and modeling of the hardware and software required to make it functional. Finally, the manufacture and laboratory testing of the finished RESA product is discussed and documented. The RESA System is designed as a cost-effective way to bring sensors and video systems to the underwater environment. It contains a water quality probe with sensors for dissolved oxygen, pH, temperature, specific conductivity, oxidation-reduction potential and chlorophyll a. In addition, an omni-directional hydrophone is included to detect underwater acoustic signals. It has a colour high-definition camera and a low-light black-and-white camera, which in turn are coupled to a laser scaling system. Both high-intensity discharge and halogen lighting systems are included to illuminate the video images. The video and laser scaling systems are manoeuvred using pan-and-tilt units controlled from an underwater computer box. Finally, a sediment profile imager is included to enable profile images of sediment layers to be acquired. A control and manipulation system to operate the instruments and move the data across networks is integrated into the underwater system, while a power distribution node provides the correct voltages to power the instruments. Laboratory testing was completed to ensure that the different instruments associated with the RESA performed as designed. This included physical testing of the motorized instruments, calibration of the instruments, benchmark performance testing and system failure exercises.

  16. Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

    NASA Astrophysics Data System (ADS)

    Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe

    We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources of graphics hardware and maximizes arithmetic intensity to achieve better-than-real-time performance of up to 42 frames per second for 800×600-resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing for further algorithmic advancement without losing real-time capability.

  17. Video cameras on wild birds.

    PubMed

    Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex

    2007-11-02

    New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.

  18. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

    In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing them using motion, texture and colorimetry cues. The characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately and in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using Genetic Programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video: the VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. The experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.
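
    Retrieval over such a compact signature space reduces to nearest-neighbour search, which is trivially real-time in eight dimensions. The generic sketch below illustrates the query step only; the evolved Genetic Programming discriminators are a separate matter.

        import numpy as np

        def query(signatures, q, top_k=5):
            """Indices of the stored 8-D segment signatures closest to q."""
            dist = np.linalg.norm(signatures - q, axis=1)
            return np.argsort(dist)[:top_k]

        db = np.random.rand(10000, 8)          # one 8-D vector per video segment
        print(query(db, np.random.rand(8)))    # five most similar segments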

  19. Rapid Development of Orion Structural Test Systems

    NASA Astrophysics Data System (ADS)

    Baker, Dave

    2012-07-01

    NASA is currently validating the Orion spacecraft design for human space flight. Three systems developed by G Systems using hardware and software from National Instruments play an important role in the testing of the new Multi-Purpose Crew Vehicle (MPCV). A custom pressurization and venting system enables engineers to apply pressure inside the test article for measuring strain. A custom data acquisition system synchronizes over 1,800 channels of analog data. This data, along with multiple video and audio streams and calculated data, can be viewed, saved, and replayed in real time on multiple client stations. This paper presents the design features and how the systems work together in a distributed fashion.

  20. Video Surveillance in Mental Health Facilities: Is it Ethical?

    PubMed

    Stolovy, Tali; Melamed, Yuval; Afek, Arnon

    2015-05-01

    Video surveillance is a tool for managing safety and security within public spaces. In mental health facilities, the major benefit of video surveillance is that it enables 24 hour monitoring of patients, which has the potential to reduce violent and aggressive behavior. The major disadvantage is that such observation is by nature intrusive. It diminishes privacy, a factor of huge importance for psychiatric inpatients. Thus, an ongoing debate has developed following the increasing use of cameras in this setting. This article presents the experience of a medium-large academic state hospital that uses video surveillance, and explores the various ethical and administrative aspects of video surveillance in mental health facilities.

  1. Advanced Infant Car Seat Would Increase Highway Safety

    NASA Technical Reports Server (NTRS)

    Dabney, Richard; Elrod, Susan

    2004-01-01

    An advanced infant car seat has been proposed to increase highway safety by reducing the incidence of crying, fussy behavior, and other child-related distractions that divert an adult driver's attention from driving. In addition to a conventional infant car seat with safety restraints, the proposed advanced infant car seat would include a number of components and subsystems that would function together as a comprehensive infant-care system that would keep its occupant safe, comfortable, and entertained, and would enable the driver to monitor the baby without having to either stop the car or turn around to face the infant during driving. The system would include a vibrator operated by a bulb switch; the switch would double as a squeeze toy that would make its own specific sound. A music subsystem would include loudspeakers built into the seat plus digital and analog circuitry that would utilize plug-in memory modules to synthesize music or a variety of other sounds. The music subsystem would include a built-in sound generator that could synthesize white noise or a human heartbeat to calm the baby to sleep. A second bulb switch could be used to control the music subsystem and would double as a squeeze toy that would make a distinct sound. An anti-noise sound-suppression subsystem would isolate the baby from potentially disturbing ambient external noises. This subsystem would include small microphones, placed near the baby's ears, to detect ambient noise. The microphone outputs would be amplified and fed to the loudspeakers at the appropriate amplitude and in a phase opposite that of the detected ambient noise, such that the net ambient sound arriving at the baby's ears would be almost completely cancelled. A video-camera subsystem would enable the driver to monitor the baby visually while continuing to face forward. One or more portable miniature video cameras could be embedded in the side of the infant car seat (see figure) or in a flip-down handle. The outputs of the video cameras would be transmitted by radio or infrared to a portable, miniature receiver/video monitor unit attached to the dashboard of the car. The video-camera subsystem could also be used within transmission/reception range when the seat was removed from the car. The system would include a biotelemetric and tracking subsystem with a Global Positioning System receiver for measuring its location. This subsystem would transmit the location of the infant car seat (even if the seat were not in a car) along with such biometric data as the baby's heart rate, perspiration rate, urinary status, temperature, and rate of breathing. Upon detecting any anomalies in the biometric data, this subsystem would send a warning to a paging device installed in the car or carried by the driver, so that the driver could pull the car off the road to attend to the baby. A motion detector in this subsystem would send a warning if the infant car seat were moved or otherwise disturbed unexpectedly while the infant was seated in it; this warning function, in combination with the position-tracking function, could help in finding a baby who had been kidnapped along with the seat. Removable rechargeable batteries would enable uninterrupted functioning of all parts of the system while transporting the baby to and from the car. The batteries could be recharged via the cigarette-lighter outlet in the car or by use of an external AC-powered charger.
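
    The anti-noise subsystem sketched in the record is classic destructive-interference cancellation: re-emit the measured ambient noise in opposite phase so the two sound fields cancel near the ears. In its very simplest form it is a sign flip; a working system would need an adaptive filter (e.g. filtered-x LMS) to model the acoustic path, which this illustrative fragment omits.

        import numpy as np

        def antinoise(mic_samples, gain=1.0):
            """Phase-inverted playback signal for the seat loudspeakers."""
            return -gain * np.asarray(mic_samples, dtype=float)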

  2. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    NASA Astrophysics Data System (ADS)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
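
    The on-sensor evaluation described above amounts to level tests on ROI statistics that can retune the readout or raise an output signal. A schematic version, with illustrative names and thresholds:

        import numpy as np

        def roi_event(frame, roi, lo, hi):
            """True if the mean intensity of the ROI leaves the [lo, hi] band.
            roi = (y0, y1, x0, x1) in pixels; frame is a 2-D array."""
            y0, y1, x0, x1 = roi
            m = frame[y0:y1, x0:x1].mean()
            return m < lo or m > hi    # trigger: change readout / set output line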

  3. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
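
    The paper's contributions are the full-frame completion and deblurring; the stabilization step underneath is the common recipe of smoothing the estimated camera path and compensating the residual shake. A compact sketch of that shared core, assuming per-frame motion parameters (dx, dy, da) have already been estimated:

        import numpy as np

        def stabilize_transforms(transforms, radius=15):
            """Low-pass filter the cumulative camera path and return corrected
            per-frame transforms; transforms is an (N, 3) array of (dx, dy, da)."""
            path = np.cumsum(transforms, axis=0)
            kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
            smoothed = np.vstack([np.convolve(path[:, i], kernel, mode='same')
                                  for i in range(path.shape[1])]).T
            return transforms + (smoothed - path)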

  4. Validation of Functional Reaching Volume as an Outcome Measure across the Spectrum of Abilities in Muscular Dystrophy

    DTIC Science & Technology

    2017-09-01

    …interactive video game regardless of ambulatory status. The objective of this project is to produce a trial-ready outcome measure that will enable clinical… custom-designed video game using the Microsoft Kinect camera, measures functional reaching volume (FRV) across the spectrum of the disease in DMD… Keywords: Kinect, video game, clinical trial readiness, neuromuscular disease, Soliton, functional reaching volume

  5. Flexible Macroblock Ordering for Context-Aware Ultrasound Video Transmission over Mobile WiMAX

    PubMed Central

    Martini, Maria G.; Hewage, Chaminda T. E. R.

    2010-01-01

    The most recent network technologies are enabling a variety of new applications, thanks to the provision of increased bandwidth and better management of Quality of Service. Nevertheless, telemedical services involving multimedia data are still lagging behind, due to the concern of the end users, that is, clinicians and also patients, about the low quality provided. Indeed, emerging network technologies should be appropriately exploited by designing the transmission strategy focusing on quality provision for end users. Stemming from this principle, we propose here a context-aware transmission strategy for medical video transmission over WiMAX systems. Context, in terms of regions of interest (ROI) in a specific session, is taken into account for the identification of multiple regions of interest, and compression/transmission strategies are tailored to such context information. We present a methodology based on H.264 medical video compression and Flexible Macroblock Ordering (FMO) for ROI identification. Two different unequal error protection methodologies, providing higher protection to the most diagnostically relevant data, are presented. PMID:20827292
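
    H.264's explicit FMO mode lets the encoder assign every macroblock to a slice group, so ROI macroblocks can be packetized and protected separately from the background. A toy map builder, with rectangular ROIs in macroblock units (illustrative only):

        import numpy as np

        def fmo_map(mb_rows, mb_cols, roi_rects):
            """Macroblock-to-slice-group map: group 0 = ROI (stronger
            protection), group 1 = background. Rects are (r0, r1, c0, c1)."""
            m = np.ones((mb_rows, mb_cols), dtype=int)
            for r0, r1, c0, c1 in roi_rects:
                m[r0:r1, c0:c1] = 0
            return m

        # e.g. a 9x11-MB frame with one diagnostically relevant region
        print(fmo_map(9, 11, [(2, 6, 3, 8)]))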

  6. [Multimedia (visual collaboration) brings true nature of human life].

    PubMed

    Tomita, N

    2000-03-01

    Videoconferencing systems, a form of high-quality visual collaboration, are bringing multimedia into society. Multimedia, meaning high-quality media such as TV broadcast, looks expensive because it requires a broadband network with 100-200 Mbps bandwidth or 3,700 analog telephone lines. However, thanks to the existing digital line called N-ISDN (Narrowband Integrated Services Digital Network) and PictureTel's audio/video compression technologies, it becomes far less expensive. N-ISDN provides 128 Kbps bandwidth, over twice that of an analog line. PictureTel's technology instantly compresses the audio/video signal to 1/1,000 of its size. This means that, with ISDN and PictureTel technology, multimedia is materialized over even a single ISDN line. This will allow a doctor to meet remotely, face-to-face, with a medical specialist or with patients to interview them, conduct physical examinations, review records, and prescribe treatments. Bonding multiple ISDN lines will further improve video quality, enabling remote surgery. A surgeon can perform an operation on an internal organ by projecting motion video from an endoscope's CCD camera onto a large display monitor. PictureTel also provides advanced technologies for eliminating background noise generated by surgical knives or scalpels during surgery, allowing the sounds of breathing or the heartbeat to be clearly transmitted to the remote site. Thus, multimedia eliminates the barrier of distance, enabling people to stay at home, or be anywhere in the world, and undergo up-to-date medical treatment by experts. This will reduce medical costs and allow people to live in the suburbs, with less pollution, closer to nature. People will foster a more open and collaborative environment by participating in local activities. Such a community-oriented lifestyle will atone for the mass-consumption, materialistic economy of the past, and bring true happiness and welfare into our lives after all.

  7. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit and display extra-high-quality movies of 8-million-pixel resolution, using the JPEG2000 coding algorithm. The resolution is four times that of HDTV, enabling digital cinema archives to replace conventional film. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. The coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder performs real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens for a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies of 24 frames per second.
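
    The abstract's figures are mutually consistent, as a quick check shows (assuming 24-bit RGB for the raw rate quoted; the projector's 30-bit mode would be proportionally higher):

        pixels = 3840 * 2048            # one decoded 8-megapixel frame
        raw_bps = pixels * 24 * 24      # 24 bits per pixel at 24 frames/s
        print(raw_bps / 1e9)            # ~4.53 Gbit/s raw, matching the text
        print(raw_bps / 400e6)          # ~11:1 compression at a 400 Mbit/s stream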

  8. Live Streaming of the Moon's Shadow from the Edge of Space across the United States during the August 2017 Total Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Guzik, T. G.

    2017-12-01

    On August 21, 2017 approximately 55 teams across the path of totality of the eclipse across America will use sounding balloon platforms to transmit, in real-time from an altitude of 90,000 feet, HD video of the moon's shadow as it crosses the U.S. from Oregon to South Carolina. This unprecedented activity was originally organized by the Montana Space Grant Consortium in order to 1) use the rare total eclipse event to captivate the imagination of students and encourage the development of new ballooning teams across the United States, 2) provide an inexpensive high bandwidth data telemetry system for real-time video streaming, and 3) establish the basic infrastructure at multiple institutions enabling advanced "new generation" student ballooning projects following the eclipse event. A ballooning leadership group consisting of Space Grant Consortia in Montana, Colorado, Louisiana, and Minnesota was established to support further development and testing of the systems, as well as to assist in training the ballooning teams. This presentation will describe the high bandwidth telemetry system used for the never before attempted live streaming of HD video from the edge of space, the results of this highly collaborative science campaign stretching from coast-to-coast, potential uses of the data telemetry system for other student science projects, and lessons learned that can be applied to the 2024 total solar eclipse.

  9. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm performs comparably to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
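
    The paper scores dissimilarity between per-frame 2-D segmentations; as a simpler stand-in that illustrates the same windowed cut-detection idea, one can score grey-level histogram differences between consecutive frames and threshold the peaks:

        import cv2
        import numpy as np

        def cut_scores(frames, bins=64):
            """Frame-to-frame histogram dissimilarity for grayscale frames;
            values well above the running level mark candidate shot cuts."""
            hists = []
            for f in frames:
                h = cv2.calcHist([f], [0], None, [bins], [0, 256])
                hists.append(cv2.normalize(h, None).flatten())
            return [float(np.abs(hists[i + 1] - hists[i]).sum())
                    for i in range(len(hists) - 1)]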

  10. Optofluidic technology for monitoring rotifer Brachionus calyciflorus responses to regular light pulses

    NASA Astrophysics Data System (ADS)

    Cartlidge, Rhys; Campana, Olivia; Nugegoda, Dayanthi; Wlodkowic, Donald

    2016-12-01

    Behavioural alterations can occur as a result of toxicant exposure at concentrations significantly lower than the lethal concentrations commonly measured in acute toxicity testing. The use of alternating light and dark photoperiods to test phototactic responses of aquatic invertebrates in the presence of environmental contaminants provides an attractive analytical avenue. Quantification of phototactic responses represents a sublethal endpoint that can be employed as an early warning signal. Despite the benefits associated with the assessment of these endpoints, there is currently a lack of automated and miniaturized bioanalytical technologies to support the development of behavioural toxicity testing with small aquatic species. In this study we present a proof-of-concept microfluidic Lab-on-a-Chip (LOC) platform for the assessment of rotifer swimming behaviour in the presence of the toxicant copper sulfate. The device was designed to assess the impact of toxicants at sub-lethal concentrations on the freshwater rotifer Brachionus calyciflorus, testing behavioural endpoints such as animal swimming distance, speed, and acceleration. The LOC device presented in this work enabled straightforward caging of the microscopic animals as well as non-invasive analysis of rapidly swimming animals in the focal plane of a video-microscopy system. The chip-based technology was fabricated using a new photolithography method that enabled formation of thick photoresist layers with minimal distortion. Photoresist molds were then employed for replica molding of LOC devices with poly(dimethylsiloxane) (PDMS) elastomer. The complete bioanalytical system consisted of: (i) a microfluidic PDMS chip-based device; (ii) a peristaltic microperfusion pumping manifold; (iii) a miniaturized CMOS camera for video data acquisition; and (iv) video analysis software algorithms for quantification of changes in the swimming behaviour of B. calyciflorus in response to reference toxicants.
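
    The behavioural endpoints named above reduce to simple kinematics over tracked centroid positions. A minimal sketch, assuming per-frame (x, y) centroids are already available from the video analysis (units and frame rate are our placeholders):

    ```python
    import numpy as np

    def swimming_endpoints(xy, fps=30.0):
        """Behavioural endpoints from a centroid track.
        xy: (N, 2) array of per-frame animal positions in mm (assumed units).
        Returns total distance, mean speed, and mean absolute acceleration."""
        xy = np.asarray(xy, dtype=float)
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # mm per frame
        speed = steps * fps                                   # mm/s
        accel = np.diff(speed) * fps                          # mm/s^2
        return {
            "distance_mm": steps.sum(),
            "mean_speed_mm_s": speed.mean(),
            "mean_abs_accel_mm_s2": np.abs(accel).mean(),
        }
    ```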

  11. Depicting surgical anatomy of the porta hepatis in living donor liver transplantation.

    PubMed

    Kelly, Paul; Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne

    2017-01-01

    Visualizing the complex anatomy of the vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method of visualizing anatomy, with the purpose of refining preoperative planning and the teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages, including the integration of pre-surgical planning, case-specific footage, and 3D models of the liver and associated vasculature reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries, and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Visualization techniques such as transparency and overlays allow viewers to see not only the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables the integration of case-based operative footage with advanced editing techniques to visualize not only the surgical procedure, but also complex anatomy such as the vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The educational system described in this paper enables the integration of operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome.

  12. Depicting surgical anatomy of the porta hepatis in living donor liver transplantation

    PubMed Central

    Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne

    2017-01-01

    Visualizing the complex anatomy of the vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method of visualizing anatomy, with the purpose of refining preoperative planning and the teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages, including the integration of pre-surgical planning, case-specific footage, and 3D models of the liver and associated vasculature reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries, and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Visualization techniques such as transparency and overlays allow viewers to see not only the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables the integration of case-based operative footage with advanced editing techniques to visualize not only the surgical procedure, but also complex anatomy such as the vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The educational system described in this paper enables the integration of operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome. PMID:29078606

  13. A Novel System for Supporting Autism Diagnosis Using Home Videos: Iterative Development and Evaluation of System Design.

    PubMed

    Nazneen, Nazneen; Rozga, Agata; Smith, Christopher J; Oberleitner, Ron; Abowd, Gregory D; Arriaga, Rosa I

    2015-06-17

    Observing behavior in the natural environment is valuable for obtaining an accurate and comprehensive assessment of a child's behavior, yet in practice assessment is typically limited to in-clinic observation. Research shows a significant time lag between when parents first become concerned and when the child is finally diagnosed with autism. This lag can delay early interventions that have been shown to improve developmental outcomes. Our objective was to develop and evaluate the design of an asynchronous system that allows parents to easily collect clinically valid in-home videos of their child's behavior and supports diagnosticians in completing a diagnostic assessment of autism. First, interviews were conducted with 11 clinicians and 6 families to solicit stakeholder feedback about the system concept. Next, the system was iteratively designed, informed by the experiences of families using it in a controlled home-like experimental setting and by a participatory design process involving domain experts. Finally, an in-field evaluation of the system design was conducted with 5 families of children (4 with a previous autism diagnosis and 1 typically developing) and 3 diagnosticians. For each family, 2 diagnosticians, blind to the child's previous diagnostic status, independently completed an autism diagnosis via our system. We compared the outcome of the assessment between the 2 diagnosticians, and between each diagnostician and the child's previous diagnostic status. The system that resulted from the iterative design process includes (1) NODA smartCapture, a mobile phone-based application for parents to record prescribed video evidence at home; and (2) NODA Connect, a Web portal for diagnosticians to direct in-home video collection, access developmental history, and conduct an assessment by linking evidence of behaviors tagged in the videos to Diagnostic and Statistical Manual of Mental Disorders criteria. Applying clinical judgment, the diagnostician then concludes a diagnostic outcome. During the field evaluation, parents easily (average rating of 4 on a 5-point scale) used the system to record video evidence without prior training. Across all in-home video evidence recorded during the field evaluation, 96% (26/27) was judged clinically useful for performing an autism diagnosis. For 4 children (3 with autism and 1 typically developing), both diagnosticians independently arrived at the correct diagnostic status (autism versus typical). Overall, in 91% of assessments (10/11) via NODA Connect, diagnosticians confidently (average rating 4.5 on a 5-point scale) concluded a diagnostic outcome that matched the child's previous diagnostic status. The in-field evaluation demonstrated that the system's design enabled parents to easily record clinically valid evidence of their child's behavior, and diagnosticians to complete a diagnostic assessment. These results shed light on the potential for appropriately designed telehealth technology to support clinical assessments using in-home video captured by families. This assessment model can readily generalize to other conditions where direct observation of behavior plays a central role in the assessment process.

  14. Investigating the Conservation of Mechanical Energy Using Video Analysis: Four Cases

    ERIC Educational Resources Information Center

    Bryan, J. A.

    2010-01-01

    Inexpensive video analysis technology now enables students to make precise measurements of an object's position at incremental times during its motion. Such capability now allows users to "examine", rather than simply "assume", energy conservation in a variety of situations commonly discussed in introductory physics courses. This article describes…
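
    As an illustration of the approach described in this record (our own minimal sketch, not the article's code), kinetic and potential energy can be computed directly from video-tracked positions and checked for constancy:

    ```python
    import numpy as np

    def mechanical_energy(t, y, m=0.1, g=9.81):
        """Kinetic, potential, and total energy from video-tracked heights.
        t: frame times (s); y: heights (m); m: object mass in kg (assumed)."""
        t, y = np.asarray(t, float), np.asarray(y, float)
        v = np.gradient(y, t)            # central-difference velocity estimate
        ke = 0.5 * m * v**2
        pe = m * g * y
        return ke, pe, ke + pe

    # Example: a ball in free fall sampled at 30 fps; total energy stays
    # approximately constant, up to numerical differentiation noise.
    t = np.arange(0, 0.5, 1 / 30)
    y = 1.5 - 0.5 * 9.81 * t**2
    ke, pe, e = mechanical_energy(t, y)
    print(e.round(3))
    ```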

  15. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    Operator interface has recently emerged as an important element for efficient and safe operator interactions with the telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  16. FaceIt: face recognition from static and live video for law enforcement

    NASA Astrophysics Data System (ADS)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology--especially face recognition--are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence, such as face and fingerprint records, and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient, cost-effective investigative tool.

  17. Web-based teaching video packages on anatomical education.

    PubMed

    Ozer, Mehmet Asim; Govsa, Figen; Bati, Ayse Hilal

    2017-11-01

    The aim of this study was to examine the effect of web-based teaching video packages on medical students' satisfaction during gross anatomy education. The objective was to test the hypothesis that individual preference, which can be related to learning style, influences individual utilization of the video packages developed specifically for the undergraduate medical curriculum. Web-based teaching video packages consisting of a Closed Circuit Audiovisual System and Distance Education of Anatomy were prepared. Fifty-four instructional videos, each lasting an average of 12 minutes and aligned with the learning objectives, were produced. Three hundred young adults of the medical school in applied anatomy education were evaluated in terms of course content, exam performance, and perceptions. A survey was conducted to determine the difference between students who did not use the teaching packages and those who used them during or after the lecture. A mean of 150 hits per student per year was recorded. Academic performance in anatomy increased by 10 points. The positive effects of the video packages on anatomy education were evident in the student survey, which was compiled under twenty different items including effectiveness, provision of educational opportunity, and positive influence on learning. Additionally, it was remarkable that the second-year students' positive views on learning differed in a statistically significant way from those of the third-year students. Web-based video packages are helpful, definitive, easily accessible, and affordable; they enable students with different paces of learning to reach information simultaneously under equal conditions and increase learning activity in crowded group lectures in cadaver labs. We conclude that the personality/learning preferences of individual students influence their use of video packages in the medical curriculum.

  18. Investigating Advances in the Acquisition of Secure Systems Based on Open Architecture, Open Source Software, and Software Product Lines

    DTIC Science & Technology

    2012-01-27

    ...example is found in games converted to serve a purpose other than entertainment, such as the development and use of games for science, technology, and... These play-session histories can then be further modded via video editing or remixing with other media (e.g., adding music) to better enable cinematic... available OSS (e.g., the Linux Kernel on the Sony PS3 game console) that game system hackers seek to undo. Finally, games are one of the most commonly...

  19. Indexing and retrieval of multimedia objects at different levels of granularity

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Durand, Gwenael; Seyrat, Claude; Tondre, Nicolas

    1998-10-01

    Intelligent access to multimedia databases for the `naive user' should probably be based on query formulation by `intelligent agents'. These agents should `understand' the semantics of the contents, learn user preferences, and deliver to the user a subset of the source contents for further navigation. The goal of such systems should be to enable `zero-command' access to the contents while keeping the user's freedom of choice. Such systems should interpret multimedia contents in terms of multiple audiovisual objects (from video to visual or audio objects), as well as actions and scenarios.

  20. GeoTrack: bio-inspired global video tracking by networks of unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Barooah, Prabir; Collins, Gaemus E.; Hespanha, João P.

    2009-05-01

    Research from the Institute for Collaborative Biotechnologies (ICB) at the University of California at Santa Barbara (UCSB) has identified swarming algorithms used by flocks of birds and schools of fish that enable these animals to move in tight formation and cooperatively track prey with minimal estimation errors, while relying solely on local communication between the animals. This paper describes ongoing work by UCSB, the University of Florida (UF), and the Toyon Research Corporation on the utilization of these algorithms to dramatically improve the capabilities of small unmanned aircraft systems (UAS) to cooperatively locate and track ground targets. Our goal is to construct an electronic system, called GeoTrack, through which a network of hand-launched UAS uses dedicated on-board processors to perform multi-sensor data fusion. The nominal sensors employed by the system will be EO/IR video cameras on the UAS. When GMTI or other wide-area sensors are available, as in a layered sensing architecture, data from the standoff sensors will also be fused into the GeoTrack system. The output of the system will be position and orientation information on stationary or mobile targets in a global geo-stationary coordinate system. The design of the GeoTrack system requires significant advances beyond the current state of the art in distributed control for a swarm of UAS to accomplish autonomous coordinated tracking; target geo-location using distributed sensor fusion by a network of UAS communicating over an unreliable channel; and unsupervised real-time image-plane video tracking on low-powered computing platforms.

  1. An objective method for a video quality evaluation in a 3DTV service

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2015-09-01

    The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic video stream analysis. The mechanisms of the designed metric, as well as an evaluation of its performance under simulated environmental conditions, are discussed. The resulting CAII metric might be effectively used in a variety of service quality assessment applications.
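
    The record does not give the CAII formula; the sketch below shows one plausible reading, in which per-frame average image intensities of a reference stream and a test stream are compared (a hypothetical reduction, not the published definition):

    ```python
    import numpy as np

    def average_image_intensity(frame):
        """Mean gray level of one decoded frame (2-D array)."""
        return float(np.mean(frame))

    def caii_score(ref_frames, test_frames):
        """Illustrative CAII-style comparison: mean absolute difference between
        the per-frame average intensities of a reference and a test stream
        (e.g., one view of a stereoscopic pair). The published metric's exact
        formula may differ; this is only a sketch."""
        ref = np.array([average_image_intensity(f) for f in ref_frames])
        tst = np.array([average_image_intensity(f) for f in test_frames])
        return float(np.mean(np.abs(ref - tst)))
    ```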

  2. Human recognition in a video network

    NASA Astrophysics Data System (ADS)

    Bhanu, Bir

    2009-10-01

    Video networking is an emerging interdisciplinary field with significant and exciting scientific and technological challenges. It has great promise in solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, including camera selection, hand-off and control, and anomaly detection. It addresses challenging questions in individual identification using gait and face at a distance, and presents new techniques and their comparison for robust identification.

  3. Direct Methanol Fuel Cell (DMFC) Battery Replacement Program

    DTIC Science & Technology

    2013-01-29

    ...selection of the Reynolds number enables use of water for simulation of gas or liquid flow. Introduction of dye to the flow stream, with video... calibrated using a soap-film flow meter (Bubble-o-meter, Dublin, OH). Eleven Array system temperature regions were set as follows prior to start of... expected. The array flow proceeds down the columns: column effects would be more likely than row effects from a design-of-experiments perspective...

  4. Captivating Broad Audiences with an Internet-connected Ocean

    NASA Astrophysics Data System (ADS)

    Moran, K.; Elliott, L.; Gervais, F.; Juniper, K.; Owens, D.; Pirenne, B.

    2012-12-01

    NEPTUNE Canada, a network of Ocean Networks Canada and the first deep-water cabled ocean observatory, began operations in December 2009. Located off Canada's west coast, the network streams data to the Internet from passive, active, and interactive sensors positioned at five nodes along its 800 km looped cable. This technically advanced system includes a sophisticated data management and archiving system, which enables the collection of real-time physical, chemical, geological, and biological oceanographic data, including video, at resolutions relevant for furthering our understanding of the dynamics of the earth-ocean system. Scientists in Canada and around the world comprise the primary audience for these data, but NEPTUNE Canada is also serving them to broader audiences including K-16 students and teachers, informal educators, citizen scientists, the press, and the public. Here we present our engagement tools, approaches, and experiences, including electronic books, personal phone apps, Internet-served video, social media, mini-observatory systems, print media, live broadcasting from sea, and a citizen scientist portal. NEPTUNE Canada's iBook is available on Apple's iBooks Store.

  5. Spaceflight Operations Services Grid (SOSG) Prototype Implementation and Feasibility Study

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Thigpen, William W.; Lisotta, Anthony J.; Redman, Sandra

    2004-01-01

    The Science Operations Services Grid (SOSG) project is focused on building a prototype grid-based environment that incorporates existing and new spaceflight services to enable current and future NASA programs with cost savings and new, evolvable methods of conducting science in a distributed environment. SOSG will provide a distributed environment for widely disparate organizations to conduct their systems and processes in a more efficient and cost-effective manner. These organizations include those that: 1) engage in space-based science and operations, 2) develop space-based systems and processes, and 3) conduct scientific research, bringing together disparate scientific disciplines like geology and oceanography to create new information. In addition, educational outreach will be significantly enhanced by providing to schools the same tools used by NASA, with the ability of the schools to actively participate on many levels in the science generated by NASA from space and on the ground. The services range from voice, video, and telemetry processing and display to data mining, high-level processing, and visualization tools, all accessible from a single portal. In this environment, users would not require high-end systems or processes at their home locations to use these services, and would need to know minimal details about the applications in order to utilize them. In addition, security at all levels is an underlying goal of the project. SOSG will focus on four tools that are currently used by the ISS Payload community, along with nine more that are new to the community. Under the prototype, four Grid virtual organizations (VOs) will be developed to represent four types of users: a Payload (experimenters) VO, a Flight Controllers VO, an Engineering and Science Collaborators VO, and an Education and Public Outreach VO. The User-based services will be implemented to replicate the operational voice, video, telemetry, and commanding systems. Once the User-based services are in place, they will be analyzed to establish their feasibility for Grid enabling; if feasible, each User-based service will be Grid-enabled. The remaining non-Grid services, if not already Web-enabled, will be made so. In the end, four portals will be developed, one for each VO, each containing the appropriate User-based services required for that VO to operate.

  6. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diodes on the tracking vehicle; a video camera on the tracking vehicle acquires images of the targets in the reflected laser light; the video images are digitized; and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and software. It also calls for assignment of additional tasks to two subsystems that are parts of the prior AVGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode, and would govern operation in the range-finding mode.

  7. Synchronous-digitization for Video Rate Polarization Modulated Beam Scanning Second Harmonic Generation Microscopy.

    PubMed

    Sullivan, Shane Z; DeWalt, Emma L; Schmitt, Paul D; Muir, Ryan M; Simpson, Garth J

    2015-03-09

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts each set of 10 images down to 5 parameters per detector in second harmonic generation (SHG) and 3 parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by driving an electro-optic modulator synchronously with the laser and digitizer using a simple sine wave at one-tenth the repetition rate of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed on micro-crystals of naproxen.
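
    The contraction of 10 polarization-resolved images to 5 fit parameters per detector is a linear least-squares problem. A sketch under our own assumptions (a truncated Fourier basis in the modulation angle; the paper's actual basis functions may differ):

    ```python
    import numpy as np

    # Hypothetical basis: I(t) = c0 + c1*cos(2t) + c2*sin(2t) + c3*cos(4t) + c4*sin(4t),
    # sampled at the 10 modulation states.
    thetas = np.linspace(0.0, np.pi, 10, endpoint=False)
    A = np.column_stack([np.ones_like(thetas),
                         np.cos(2 * thetas), np.sin(2 * thetas),
                         np.cos(4 * thetas), np.sin(4 * thetas)])   # (10, 5)

    def contract(images):
        """images: (10, H, W) stack, one frame per polarization state.
        Returns (5, H, W) per-pixel coefficients via linear least squares."""
        images = np.asarray(images, dtype=float)
        n, h, w = images.shape
        coeffs, *_ = np.linalg.lstsq(A, images.reshape(n, -1), rcond=None)
        return coeffs.reshape(5, h, w)
    ```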

  8. Synchronous-digitization for video rate polarization modulated beam scanning second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Sullivan, Shane Z.; DeWalt, Emma L.; Schmitt, Paul D.; Muir, Ryan D.; Simpson, Garth J.

    2015-03-01

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts each set of 10 images down to 5 parameters per detector in second harmonic generation (SHG) and 3 parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by driving an electro-optic modulator synchronously with the laser and digitizer using a simple sine wave at one-tenth the repetition rate of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed on micro-crystals of naproxen.

  9. Conducting a study of Internet-based video conferencing for assessing acute medical problems in a nursing facility.

    PubMed Central

    Weiner, Michael; Schadow, Gunther; Lindbergh, Donald; Warvel, Jill; Abernathy, Greg; Perkins, Susan M.; Dexter, Paul R.; McDonald, Clement J.

    2002-01-01

    We expect the use of real-time, interactive video conferencing to grow due to more affordable technology and new health policies. Building and implementing portable systems to enable conferencing between physicians and patients requires durable equipment, committed staff, reliable service, and adequate protection and capture of data. We are studying the use of Internet-based conferencing between on-call physicians and patients residing in a nursing facility, and we describe the challenges we experienced in constructing the study. Initiating and orchestrating unscheduled conferences needs to be easy, and requirements for training staff to use the equipment should be minimal. Studies of health outcomes should include identification of the medical conditions most likely to benefit from conferencing, and outcomes should include positive as well as negative effects. PMID:12463950

  10. A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.

    PubMed

    Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas

    2013-01-01

    Real-time spectral imaging and mapping at video rates can have tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snap-shot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six-band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. This high accuracy is accompanied by spectral mapping that is three orders of magnitude faster than scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects, as well as in applications where the target-probe relative position changes rapidly and randomly.
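
    Wiener-style spectral estimation from a small number of bands reduces to a learned linear mapping. A minimal sketch (the noise-free least-squares special case; the paper's estimator presumably includes a noise term in the inverted matrix):

    ```python
    import numpy as np

    def train_wiener(S, C):
        """Learn the matrix W mapping 6-band camera responses to spectra.
        S: (n_samples, n_wavelengths) training spectra.
        C: (n_samples, 6) corresponding camera responses.
        Minimizes E||s - W c||^2; with measurement noise, a regularization
        term would normally be added before inversion."""
        return S.T @ C @ np.linalg.inv(C.T @ C)

    def estimate_spectrum(W, c):
        """Reconstruct one spectrum from a single 6-band measurement c."""
        return W @ np.asarray(c, dtype=float)
    ```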

  11. Automation and apps for clinical dental biomechanics.

    PubMed

    Adams, Bruce W

    2016-09-01

    The aim of this research summary is to introduce current and ongoing work using smartphone video and tracking markers to measure musculoskeletal disorders of cranial and mandibular origin, and the potential significance of the technology to doctors and therapists. The MPA™ biomechanical measuring apps are in beta trials with various doctors and therapists. The technique requires substantial image processing and statistical analysis, best suited to server-side processing. A smartphone environment has enabled a virtual laboratory, which provides automated generation of graphics and, in some cases, automated interpretation. The system enables highly accurate real-time biomechanics studies using only a smartphone and tracking markers. Despite the technical challenges in setting up and testing the virtual environment and in interpreting clinical relevance, the trials have enabled a demonstration of real-time biomechanics studies. The technology has prompted much discussion about the relevance of rapid assessment tools in clinical practice. A prior bias against motion tracking and its relevance appears very strong for occlusion-related use cases, yet there has been general agreement about the use case for cranial movement tracking in managing complex issues related to the head, neck, and TMJ. Measurement of cranial and mandibular function using smartphone video as the input has been investigated. Ongoing research will depend upon doctors and therapists providing feedback as to which uses are considered clinically relevant.

  12. Achieving quality of service in IP networks

    NASA Astrophysics Data System (ADS)

    Hays, Tim

    2001-07-01

    The Internet Protocol (IP) has served global networks well, providing a standardized method to transmit data among many disparate systems. But IP is designed for simplicity and only enables a `best effort' service that can be subject to delays and data loss. For data networks, this is an acceptable trade-off. In the emerging world of convergence, driven by new applications such as video streaming and IP telephony, minimizing latency, packet loss, and jitter can be critical. Simply increasing the size of the IP network `pipe' to meet those demands is not always sufficient. In this environment, vendors and standards bodies are endeavoring to create technologies and techniques that enable IP to improve the quality of service it provides, while retaining the characteristics that have enabled it to become the dominant networking protocol.

  13. Choosing an Angle: Citizenship through Video Production

    ERIC Educational Resources Information Center

    Verrall, Ben

    2006-01-01

    Citizenship education is an important part of the development of young adults, enabling them to learn about their rights and responsibilities, and to understand how society works. Video is an effective medium for young people to express their views and, through involvement in a production process, they are able to learn more about putting forward…

  14. Enabling Access and Enhancing Comprehension of Video Content for Postsecondary Students with Intellectual Disability

    ERIC Educational Resources Information Center

    Evmenova, Anya S.; Behrmann, Michael M.

    2014-01-01

    There is a great need for new innovative tools to integrate individuals with intellectual disability into educational experiences. This multiple baseline study examined the effects of various adaptations for improving factual and inferential comprehension of non-fiction videos by six postsecondary students with intellectual disability. Video…

  15. Toward a Video Pedagogy: A Teaching Typology with Learning Goals

    ERIC Educational Resources Information Center

    Andrist, Lester; Chepp, Valerie; Dean, Paul; Miller, Michael V.

    2014-01-01

    Given the massive volume of course-relevant videos now available on the Internet, this article outlines a pedagogy to facilitate the instructional employment of such materials. First, we describe special features of streaming media that have enabled their use in the classroom. Next, we introduce a typology comprised of six categories (conjuncture,…

  16. The Effect of Student Self-Video of Performance on Clinical Skill Competency: A Randomised Controlled Trial

    ERIC Educational Resources Information Center

    Maloney, Stephen; Storr, Michael; Morgan, Prue; Ilic, Dragan

    2013-01-01

    Emerging technologies and student information technology literacy are enabling new methods of teaching and learning for clinical skill performance. Facilitating experiential practice and reflection on performance through student self-video, and exposure to peer benchmarks, may promote greater levels of skill competency. This study examines the…

  17. Development of an NPS Middle Ultraviolet Spectrograph (Mustang) Electronic Interface

    DTIC Science & Technology

    1991-12-01

    Fragment from the report's list of figures: Encode Command Signal and Video Data Signal after connecting coaxial shield; Data Ready Signal and Video Data Signal after connecting coaxial shield; Word Clock and Gated Enable Signal Rising Edge.

  18. The choking game and YouTube: a dangerous combination.

    PubMed

    Linkletter, Martha; Gordon, Kevin; Dooley, Joe

    2010-03-01

    To study postings of partial asphyxiation by adolescents on YouTube and to increase awareness of this dangerous activity, as well as of the value of YouTube as a research tool. Videos were searched on YouTube using many terms for recreational partial asphyxiation. Data were gathered on the participants and on the occurrence of hypoxic seizures. Sixty-five videos of the asphyxiation game were identified. Most participants (90%) were male. A variety of techniques were used. Hypoxic seizures were witnessed in 55% of the videos, but occurred in 88% of videos that employed the "sleeper hold" technique. The videos were collectively viewed 173,550 times on YouTube. YouTube has enabled millions of young people to watch videos of the "choking game" and other dangerous activities. Seeing such videos may normalize the behavior among adolescents. Increased awareness of this activity may prevent some youths from participating and potentially harming themselves or others.

  19. A simple method for panretinal imaging with the slit lamp.

    PubMed

    Gellrich, Marcus-Matthias

    2016-12-01

    Slit lamp biomicroscopy of the retina with a convex lens is a key procedure in clinical practice. The methods presented enable ophthalmologists to adequately image large and peripheral parts of the fundus using a video slit lamp and freely available stitching software. A routine examination of the fundus with a slit lamp and a +90 D lens is recorded on video. Sufficiently sharp still images are then identified in the video sequence. These still images are imported into a freely available image-processing program (Hugin, for stitching mosaics together digitally) and corresponding points are marked on adjacent, partially overlapping still images. Using Hugin, panoramic overviews of the retina can be built that extend to the equator. This makes it possible to image diseases involving the whole retina or its periphery by performing a structured fundus examination with a video slit lamp. Comparable video-slit-lamp images based on a fundus examination through a hand-held non-contact lens have not been demonstrated before. The methods presented enable ophthalmologists without high-end imaging equipment to monitor pathological fundus findings. The suggested procedure might even be of interest to retinological departments when peripheral findings, which can be difficult to capture with fundus cameras, are to be documented.
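
    The authors stitch still frames in Hugin using manually marked control points. As a programmatic stand-in (our suggestion, not the article's workflow), OpenCV's high-level Stitcher can assemble a similar mosaic from overlapping stills; the file names below are hypothetical:

    ```python
    # Build a fundus mosaic from overlapping video stills with OpenCV.
    # SCANS mode suits flat, scanner-like overlap between frames.
    import cv2

    frames = [cv2.imread(f"fundus_{i:02d}.png") for i in range(12)]
    frames = [f for f in frames if f is not None]      # drop unreadable files

    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:                      # 0 on success
        cv2.imwrite("fundus_panorama.png", mosaic)
    else:
        print("stitching failed, status code:", status)
    ```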

  20. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. It was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App; for case 4, the GoPro was linked to a WiFi remote controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for the bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to compensate sufficiently for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  1. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs, and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. A rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the monitored scene while compressing a video sequence. A search for a specific vehicle in the compressed video stream is then performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (and thus time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
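
    The core idea, detection-triggered reference frames plus search restricted to those frames, can be sketched in a few lines (the detector and query matcher are placeholders; codec details are omitted):

    ```python
    # Sketch of detection-triggered I-frame selection and reference-frame-only
    # search. detect_vehicle() stands in for any detector; nothing here
    # reproduces the paper's actual encoder.

    def choose_reference_frames(frames, detect_vehicle):
        """Return indices of frames that would be encoded as I-frames."""
        return [i for i, f in enumerate(frames) if detect_vehicle(f)]

    def search_compressed_video(frames, detect_vehicle, matches_query):
        """Inspect only the vehicle-triggered reference frames, instead of
        decompressing and scanning the full sequence."""
        hits = []
        for i in choose_reference_frames(frames, detect_vehicle):
            if matches_query(frames[i]):     # e.g., plate or color match
                hits.append(i)
        return hits
    ```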

  2. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

    How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users in a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 megapixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand, and collaborate on complex datasets. In the not-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences Departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.

  3. Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.

    2008-01-01

    NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low-visibility conditions as causal factors in civil aircraft accidents, while enabling the operational benefits of clear-day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media, and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper emphasizes the system aspects of SVS - true systems, rather than just terrain on a flight display - and documents, from an historical viewpoint, many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor in accidents and enable clear-day operational benefits regardless of visibility conditions.

  4. 3rd-generation MW/LWIR sensor engine for advanced tactical systems

    NASA Astrophysics Data System (ADS)

    King, Donald F.; Graham, Jason S.; Kennedy, Adam M.; Mullins, Richard N.; McQuitty, Jeffrey C.; Radford, William A.; Kostrzewa, Thomas J.; Patten, Elizabeth A.; McEwan, Thomas F.; Vodicka, James G.; Wootan, John J.

    2008-04-01

    Raytheon has developed a 3rd-Generation FLIR Sensor Engine (3GFSE) for advanced U.S. Army systems. The sensor engine is based around a compact, productized detector-dewar assembly incorporating a 640 x 480 staring dual-band (MW/LWIR) focal plane array (FPA) and a dual-aperture coldshield mechanism. The capability to switch the coldshield aperture and operate at either of two widely-varying f/#s will enable future multi-mode tactical systems to more fully exploit the many operational advantages offered by dual-band FPAs. RVS has previously demonstrated high-performance dual-band MW/LWIR FPAs in 640 x 480 and 1280 x 720 formats with 20 μm pitch. The 3GFSE includes compact electronics that operate the dual-band FPA and variable-aperture mechanism, and perform 14-bit analog-to-digital conversion of the FPA output video. Digital signal processing electronics perform "fixed" two-point non-uniformity correction (NUC) of the video from both bands and optional dynamic scene-based NUC; advanced enhancement processing of the output video is also supported. The dewar-electronics assembly measures approximately 4.75 x 2.25 x 1.75 inches. A compact, high-performance linear cooler and cooler electronics module provide the necessary FPA cooling over a military environmental temperature range. 3GFSE units are currently being assembled and integrated at RVS, with the first units planned for delivery to the US Army.

  5. A new generation of small pixel pitch/SWaP cooled infrared detectors

    NASA Astrophysics Data System (ADS)

    Espuno, L.; Pacaud, O.; Reibel, Y.; Rubaldo, L.; Kerlain, A.; Péré-Laperne, N.; Dariel, A.; Roumegoux, J.; Brunner, A.; Kessler, A.; Gravrand, O.; Castelein, P.

    2015-10-01

    Following clear technological trends, the cooled IR detector market now demands smaller, more efficient, and higher performance products. This demand pushes product development toward constant innovation in detectors, read-out circuits, proximity electronics boards, and coolers. Sofradir was the first to show a 10 μm focal plane array (FPA) at DSS 2012, and announced the DAPHNIS 10 μm product line in 2014. This pixel pitch is a key enabler for infrared detectors with increased resolution. Sofradir recently achieved outstanding product demonstrations at this pixel pitch, which clearly demonstrate the benefits of adopting 10 μm pitch focal-plane-array-based detectors. Both the HD and XGA Daphnis 10 μm products also benefit from improved video datapath efficiency through the transition to digital video interfaces. Moreover, innovative smart-pixel functionalities drastically increase product versatility. In addition to this strong push towards higher pixel density, Sofradir acknowledges the need for smaller and lower-power cooled infrared detectors. Together with straightforward system interfaces and better overall performance, the latest technological advances in SWaP-C (Size, Weight, Power and Cost) Sofradir products enable a new generation of high-performance portable and agile systems (handheld thermal imagers, unmanned aerial vehicles, light gimbals, etc.). This paper focuses on those features and performances that can make an actual difference in the field.

  6. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights, using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and a high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
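
    The gray-level arithmetic behind this kind of two-channel combiner can be sketched as follows; the weight w below is our illustrative value, the actual VideoSwitcher weight being fixed by its resistor network:

    ```python
    # Approximate gray-level count when an 8-bit "fine" channel is summed with
    # an 8-bit "coarse" channel scaled by a weight w. The level count is an
    # approximation: overlapping coarse/fine steps make some levels coincide.
    import math

    def combined_levels(bits=8, w=128):
        coarse_levels = 2**bits              # red channel, weighted by w
        fine_levels = 2**bits                # blue channel, weight 1
        total = coarse_levels * w + fine_levels
        return total, math.log2(total)

    levels, eff_bits = combined_levels()
    print(f"~{levels} levels, ~{eff_bits:.1f} effective bits")   # ~15 bits
    ```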

  7. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track, and defeat their adversaries. With millions of minutes of video being generated by an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content and video metadata to provision filtering and locate segments of interest in the context of an analyst's query. Our approach utilizes a novel machine-vision-based method to index FMV, using object recognition and tracking together with detection of events and activities. This approach enables FMV exploitation in real time, as well as forensic look-back within archives. It can help extract the most information from video sensor collection, help focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  8. Presentation video retrieval using automatically recovered slide and spoken text

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
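
    A minimal sketch of the retrieval setting (our own illustration with placeholder documents; the paper's experiments and metrics are far richer): index each lecture video by its recovered slide or spoken text, then rank videos against a text query.

    ```python
    # Rank lecture videos against a query using TF-IDF over recovered text.
    # Corpus strings below are invented placeholders, not the paper's data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_videos(doc_texts, query):
        """doc_texts: one recovered-text string per video (OCR'd slides or ASR)."""
        vec = TfidfVectorizer(stop_words="english")
        doc_matrix = vec.fit_transform(doc_texts)
        sims = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
        return sims.argsort()[::-1]          # video indices, best match first

    slide_texts = ["gradient descent convergence proof",
                   "tcp congestion control mechanisms"]
    print(rank_videos(slide_texts, "congestion control in tcp"))   # -> [1 0]
    ```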

  9. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
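
    A Hammerstein-Wiener model is a static input nonlinearity, a linear filter, and a static output nonlinearity in series. The sketch below simulates that structure with illustrative components (nothing here reproduces the paper's fitted model):

    ```python
    import numpy as np

    def hammerstein_wiener(u, f_in, b, a, f_out):
        """Static input nonlinearity f_in -> IIR linear filter (b, a) ->
        static output nonlinearity f_out. All components are illustrative."""
        x = f_in(np.asarray(u, dtype=float))
        y = np.zeros_like(x)
        for n in range(len(x)):
            acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
            acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
            y[n] = acc / a[0]
        return f_out(y)

    # Per-frame quality drops mid-stream (e.g., a bit-rate reduction); the
    # predicted TVSQ reacts sluggishly, mimicking viewers' delayed judgments.
    u = np.r_[np.full(80, 80.0), np.full(80, 30.0)]
    f_in = lambda v: v / (v + 20.0)          # saturating perception (assumed)
    tvsq = hammerstein_wiener(u, f_in, b=[0.05], a=[1.0, -0.95],
                              f_out=lambda v: 100.0 * v)
    ```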

  10. PRagmatic trial Of Video Education in Nursing homes: The design and rationale for a pragmatic cluster randomized trial in the nursing home setting.

    PubMed

    Mor, Vincent; Volandes, Angelo E; Gutman, Roee; Gatsonis, Constantine; Mitchell, Susan L

    2017-04-01

    Background/Aims: Nursing homes are complex healthcare systems serving an increasingly sick population. Nursing homes must engage patients in advance care planning, but do so inconsistently. Video decision support tools improved advance care planning in small randomized controlled trials. Pragmatic trials are increasingly employed in health services research, although not commonly in the nursing home setting, to which they are well suited. This report presents the design and rationale for a pragmatic cluster randomized controlled trial that evaluated the "real world" application of an Advance Care Planning Video Program in two large US nursing home healthcare systems. Methods: PRagmatic trial Of Video Education in Nursing homes was conducted in 360 nursing homes (N = 119 intervention/N = 241 control) owned by two healthcare systems. Over an 18-month implementation period, intervention facilities were instructed to offer the Advance Care Planning Video Program to all patients. Control facilities employed usual advance care planning practices. Patient characteristics and outcomes were ascertained from Medicare Claims, Minimum Data Set assessments, and facility electronic medical record data. Intervention adherence was measured using a Video Status Report embedded into electronic medical record systems. The primary outcome was the number of hospitalizations/person-day alive among long-stay patients with advanced dementia or cardiopulmonary disease. The rationale for the approaches to facility randomization and recruitment, intervention implementation, population selection, data acquisition, regulatory issues, and statistical analyses is discussed. Results: The large number of well-characterized candidate facilities enabled several unique design features, including stratification on historical hospitalization rates, randomization prior to recruitment, and a 2:1 control-to-intervention facilities ratio. Strong endorsement from corporate leadership made randomization prior to recruitment feasible, with 100% participation of facilities randomized to the intervention arm. Critical regulatory issues included minimal risk determination, waiver of informed consent, and determination that nursing home providers were not engaged in human subjects research. Intervention training and implementation were initiated on 5 January 2016 using corporate infrastructures for new program roll-out, guided by standardized training elements designed by the research team. Video Status Reports in facilities' electronic medical records permitted "real-time" adherence monitoring and corrective actions. The Centers for Medicare and Medicaid Services Virtual Research Data Center allowed for rapid outcomes ascertainment. Conclusion: We must rigorously evaluate interventions to deliver more patient-focused care to an increasingly frail nursing home population. Video decision support is a practical approach to improve advance care planning. PRagmatic trial Of Video Education in Nursing homes has the potential to promote goal-directed care among millions of older Americans in nursing homes and to establish a methodology for future pragmatic randomized controlled trials in this complex healthcare setting.

  11. Implications of the law on video recording in clinical practice.

    PubMed

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  12. Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems

    NASA Astrophysics Data System (ADS)

    Hirsch, M.

    2016-12-01

    Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers in solar-powered and other resource-constrained systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, aiding the automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may generalize beyond the auroral video and passive FM radar applications implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables audiences from experts to citizen enthusiasts to access the data and make exciting plots. Open software and data aid cross-disciplinary collaboration, STEM outreach, and public awareness of the contributions each geoscience data collection system makes.
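
    The reduction step can be illustrated with a simple frame-differencing filter: retain a frame only when it changes enough from its predecessor. This is a hedged stand-in for the paper's OpenCV-based detection; the file name and threshold are made up.

    ```python
    import cv2

    # Keep only frames whose mean absolute difference from the previous frame
    # exceeds a threshold -- a stand-in for the aurora-detection step.
    THRESHOLD = 4.0  # hypothetical tuning value

    cap = cv2.VideoCapture("aurora_night.avi")  # placeholder file name
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    kept = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.absdiff(gray, prev).mean()
        if score > THRESHOLD:
            cv2.imwrite(f"keep_{kept:06d}.png", frame)  # retain high-value frame
            kept += 1
        prev = gray
    cap.release()
    ```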

  13. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
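
    The core operation, pixelating a camera frame down to the electrode-array dimensions while preserving contrast transitions, might look like the following sketch. The 16x16 grid size, the Laplacian-based edge boost, and the module order are illustrative assumptions, not the AVS(2) implementation.

    ```python
    import cv2
    import numpy as np

    # Hypothetical electrode-array dimensions; real implants vary.
    GRID_W, GRID_H = 16, 16

    def to_electrode_grid(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Emphasize edges before decimation so contrast transitions survive
        # the drastic loss of resolution.
        edges = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
        enhanced = cv2.convertScaleAbs(gray.astype(np.int16) + edges)
        # Downsample to the electrode count; INTER_AREA averages each block.
        return cv2.resize(enhanced, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)
    ```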

  14. Simulation videos presented in a blended learning platform to improve Australian nursing students' knowledge of family assessment.

    PubMed

    Coyne, Elisabeth; Frommolt, Valda; Rands, Hazel; Kain, Victoria; Mitchell, Marion

    2018-07-01

    The provision of simulation to enhance learning is becoming common practice as clinical placement becomes harder to secure within Bachelor of Nursing programs. The use of simulation videos within a blended learning platform enables students to view best practice and provides relevant links between theory and practice. Four simulation videos depicting family assessment, viewed by a cohort of Australian undergraduate nursing students, were evaluated. These videos were professionally developed using actors and experienced family nurses. Surveys were used to explore the students' self-assessed knowledge, confidence, and learning preferences before and after exposure to the blended learning resources. Students' engagement with the simulated videos was captured via the Learning Management System. The Time 1 survey was completed by 163 students and the Time 2 survey by 91 students. There was a significant increase in students' perceived knowledge of family theory (Item 1), from a mean of 4.13 (SD = 1.04) at Time 1 to 4.74 (SD = 0.89) at Time 2 (Z = -4.54, p < 0.001); knowledge of family assessment (Item 2) improved from a mean of 3.91 (SD = 1.02) at Time 1 to 4.90 (SD = 0.67) at Time 2 (Z = -7.86, p < 0.001). There was also a significant increase in confidence undertaking family assessment (Item 5), from a mean of 3.55 (SD = 1.14) at Time 1 to 4.44 (SD = 0.85) at Time 2 (Z = -6.12, p < 0.001). The students watched the videos an average of 1.9 times. The simulated videos as a blended learning resource increase students' understanding of family assessment and are worth incorporating into the future development of courses. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Learners' Use of Communication Strategies in Text-Based and Video-Based Synchronous Computer-Mediated Communication Environments: Opportunities for Language Learning

    ERIC Educational Resources Information Center

    Hung, Yu-Wan; Higgins, Steve

    2016-01-01

    This study investigates the different learning opportunities enabled by text-based and video-based synchronous computer-mediated communication (SCMC) from an interactionist perspective. Six Chinese-speaking learners of English and six English-speaking learners of Chinese were paired up as tandem (reciprocal) learning dyads. Each dyad participated…

  16. Game-Based Curricula in Biology Classes: Differential Effects among Varying Academic Levels

    ERIC Educational Resources Information Center

    Sadler, Troy D.; Romine, William L.; Stuart, Parker E.; Merle-Johnson, Dominike

    2013-01-01

    Video games have become a popular medium in our society, and recent scholarship suggests that games can support substantial learning. This study stems from a project in which we created a video game enabling students to use biotechnology to solve a societal problem. As students engaged in the game, they necessarily interacted with the underlying…

  17. Using Text Mining to Uncover Students' Technology-Related Problems in Live Video Streaming

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2011-01-01

    Because of their capacity to sift through large amounts of data, text mining and data mining are enabling higher education institutions to reveal valuable patterns in students' learning behaviours without having to resort to traditional survey methods. In an effort to uncover live video streaming (LVS) students' technology related-problems and to…

  18. The Resonance Factor: Probing the Impact of Video on Student Retention in Distance Learning

    ERIC Educational Resources Information Center

    Geri, Nitza

    2012-01-01

    Teaching and instructing is one of the challenging manifestations of informing, within which distance learning is considered harder than face-to-face instruction. Student retention is one of the major challenges of distance learning. Current innovative technologies enable widespread use of video lectures that may ease the loneliness of the…

  19. The U-Curve of E-Learning: Course Website and Online Video Use in Blended and Distance Learning

    ERIC Educational Resources Information Center

    Geri, Nitza; Gafni, Ruti; Winer, Amir

    2014-01-01

    Procrastination is a common challenge for students. While course Websites and online video lectures enable studying anytime, anywhere, and expand learning opportunities, their availability may increase procrastination by making it easier for students to defer until tomorrow. This research used Google Analytics to examine temporal use patterns of…

  20. Teaching "How Science Works" by Making and Sharing Videos

    ERIC Educational Resources Information Center

    Ingram, Neil

    2010-01-01

    "Science.tv" is a website where teachers and pupils can find quality video clips on a variety of scientific topics. It enables pupils to share research ideas and adds a dynamic new dimension to practical work. It has the potential to become an innovative way of incorporating "How science works" into secondary science curricula by encouraging…

  1. Monitoring Therapy Adherence of Tuberculosis Patients by using Video-Enabled Electronic Devices

    PubMed Central

    Story, Alistair; Garfein, Richard S.; Hayward, Andrew; Rusovich, Valiantsin; Dadu, Andrei; Soltan, Viorel; Oprunenco, Alexandru; Collins, Kelly; Sarin, Rohit; Quraishi, Subhi; Sharma, Mukta; Migliori, Giovanni Battista; Varadarajan, Maithili

    2016-01-01

    A recent innovation to help patients adhere to daily tuberculosis (TB) treatment over many months is video (or virtually) observed therapy (VOT). VOT is becoming increasingly feasible as mobile telephone applications and tablet computers become more widely available. Studies of the effectiveness of VOT in improving TB patient outcomes are being conducted. PMID:26891363

  2. Plugin free remote visualization in the browser

    NASA Astrophysics Data System (ADS)

    Tamm, Georg; Slusallek, Philipp

    2015-01-01

    Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state-of-the-art of technologies which enable plugin free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) conform Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
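
    The paper's system delivers video via WebRTC and NaCl; as a much simpler illustration of the alternative image-based transfer path that likewise needs no plugin, the sketch below serves server-side renderings to any browser as an MJPEG (multipart/x-mixed-replace) stream. The render() stub, port, frame rate, and resolution are all placeholders, not the authors' pipeline.

    ```python
    import io
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import numpy as np
    from PIL import Image

    BOUNDARY = "frame"

    def render():
        # Stand-in for the server-side renderer: a random test image.
        img = (np.random.rand(240, 320, 3) * 255).astype("uint8")
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG")
        return buf.getvalue()

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type",
                             f"multipart/x-mixed-replace; boundary={BOUNDARY}")
            self.end_headers()
            while True:  # stream until the client disconnects
                jpeg = render()
                self.wfile.write(f"--{BOUNDARY}\r\n".encode())
                self.wfile.write(b"Content-Type: image/jpeg\r\n\r\n")
                self.wfile.write(jpeg + b"\r\n")
                time.sleep(1 / 15)  # ~15 fps

    HTTPServer(("", 8080), MJPEGHandler).serve_forever()
    ```

    Pointing a browser at http://localhost:8080 displays the stream with no plugin; the trade-off versus WebRTC is higher latency and no congestion-adaptive video coding.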

  3. Compact light-emitting diode lighting ring for video-assisted thoracic surgery.

    PubMed

    Lu, Ming-Kuan; Chang, Feng-Chen; Wang, Wen-Zhe; Hsieh, Chih-Cheng; Kao, Fu-Jen

    2014-01-01

    In this work, a foldable ring-shaped light-emitting diode (LED) lighting assembly, designed to attach to a rubber wound retractor, is realized and tested through porcine animal experiments. Enabled by the small size and the high efficiency of LED chips, the lighting assembly is compact, flexible, and disposable while providing direct and high brightness lighting for more uniform background illumination in video-assisted thoracic surgery (VATS). When compared with a conventional fiber bundle coupled light source that is usually used in laparoscopy and endoscopy, the much broader solid angle of illumination enabled by the LED assembly allows greatly improved background lighting and imaging quality in VATS.

  4. Lymphatic mapping with fluorescence navigation using indocyanine green and axillary surgery in patients with primary breast cancer.

    PubMed

    Takeuchi, Megumi; Sugie, Tomoharu; Abdelazeem, Kassim; Kato, Hironori; Shinkura, Nobuhiko; Takada, Masahiro; Yamashiro, Hiroyasu; Ueno, Takayuki; Toi, Masakazu

    2012-01-01

    The indocyanine green fluorescence (ICGf) navigation method provides real-time lymphatic mapping and sentinel lymph node (SLN) visualization, which enables the removal of SLNs and their associated lymphatic networks. In this study, we investigated the features of the drainage pathways detected with the ICGf navigation system and the order of metastasis in axillary nodes. From April 2008 to February 2010, 145 patients with clinically node-negative breast cancer underwent SLN surgery with ICGf navigation; the video-recorded data from 79 of these patients were used for lymphatic mapping analysis. Fluorescence-positive SLNs were identified in 144 (99%) of the 145 patients. Both single and multiple routes to the axilla were identified in 47% of cases using the video-recorded lymphatic mapping data. An internal mammary route was detected in 6% of the cases. Skip metastasis to the second or third SLNs was observed in 6 of the 28 node-positive patients. We also examined the strategy of axillary surgery using the ICGf navigation system and found that, based on the features of nodal involvement, 4-node resection could provide precise information on the nodal status. The ICGf navigation system may provide a different lymphatic mapping result than computed tomography lymphography in clinically node-negative breast cancer patients. Furthermore, it enables the identification of lymph nodes that do not accumulate indocyanine green or dye adjacent to the SLNs in the sequence of drainage. Knowledge of the order of nodal metastasis as revealed by the ICGf system may help to personalize the surgical treatment of the axilla in SLN-positive cases, although additional studies are required. © 2012 Wiley Periodicals, Inc.

  5. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A feasibility study of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to record operations easily and at low cost. The purpose of this study was to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and the GoPro; this is the first such report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon, and video was recorded throughout the operation. The comparison considered human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and the GoPro uniquely offered 2.7K or 4K resolution; video resolution was best with the GoPro. Regarding field of view, the GoPro can adjust its point of interest and field of view to suit the surgery, and its narrow field-of-view option was the best setting for recording video clips to share. Google Glass has further potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass offers a two-way communication feature in the device. Action cameras have the potential to improve patient safety, operator comfort, and procedural efficiency in the field of spinal surgery, and to support broadcasting of surgery as the devices and associated programs develop. Level of evidence: N/A.

  6. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
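
    The automatic-mode loop described above (increment the frame, grab it, process it, locate the object, store the coordinates) maps naturally onto a few lines of modern OpenCV. This is a loose modern re-creation, not the original workstation software; the file names and Otsu thresholding are assumptions.

    ```python
    import cv2

    # Minimal frame-by-frame tracker: advance a frame, segment the bright
    # object (e.g., a flame front), and log its centroid coordinates.
    cap = cv2.VideoCapture("flame_front.avi")  # placeholder file name
    with open("track.csv", "w") as log:
        frame_no = 0
        while True:
            ok, frame = cap.read()          # "frame incrementation" + grab
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            m = cv2.moments(mask, binaryImage=True)
            if m["m00"] > 0:                # centroid of the segmented object
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                log.write(f"{frame_no},{cx:.1f},{cy:.1f}\n")
            frame_no += 1
    cap.release()
    ```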

  7. Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.

    2016-02-01

    Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live streamed 55 ROV dives in high definition and recorded them onto YouTube, totaling over 327 hours of video that received 1,450,461 views in 2014. SOI is one of the only research programs that makes its entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, exemplified by several conducted in 2015. This presentation will share a few case studies, including an image-tagging citizen science project conducted through the Squidle interface in partnership with the Australian Center for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high performance computer (HPC) to accomplish habitat characterization. The HPC system made real-time robot tracking, image tagging, and other outreach connections possible, allowing scientists on board to engage with the public and build their knowledge base. These examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.

  8. Development of a telediagnosis endoscopy system over secure internet.

    PubMed

    Ohashi, K; Sakamoto, N; Watanabe, M; Mizushima, H; Tanaka, H

    2008-01-01

    We developed a new telediagnosis system to securely transmit high-quality endoscopic moving images over the Internet in real time. This system would enable collaboration between physicians seeking advice from endoscopists separated by long distances, to facilitate diagnosis. We adapted a new type of digital video streaming system (DVTS) to our teleendoscopic diagnosis system. To investigate its feasibility, we conducted a two-step experiment. A basic experiment was first conducted to transmit endoscopic video images between hospitals using a plain DVTS. After investigating the practical usability, we incorporated a secure and reliable communication function into the system, by equipping DVTS with "TCP2", a new security technology that establishes secure communication in the transport layer. The second experiment involved international transmission of teleendoscopic image between Hawaii and Japan using the improved system. In both the experiments, no serious transmission delay was observed to disturb physicians' communications and, after subjective evaluation by endoscopists, the diagnostic qualities of the images were found to be adequate. Moreover, the second experiment showed that "TCP2-equipped DVTS" successfully executed high-quality secure image transmission over a long distance network. We conclude that DVTS technology would be promising for teleendoscopic diagnosis. It was also shown that a high quality, secure teleendoscopic diagnosis system can be developed by equipping DVTS with TCP2.
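
    TCP2 itself is a proprietary transport-layer security technology, but the general pattern, securing the video transport at the transport layer rather than in the application, can be sketched with standard TLS. The client below sends length-prefixed encoded frames over a TLS-wrapped TCP connection; it is an analogy, not the DVTS/TCP2 protocol, and the frame format is invented.

    ```python
    import socket
    import ssl
    import struct

    # Analogous illustration: wrap the video transport in standard TLS and
    # send length-prefixed encoded frames (e.g., DV packets) to the far end.
    ctx = ssl.create_default_context()

    def send_frames(host, port, frames):
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                for frame in frames:  # frame: bytes of one encoded video unit
                    tls.sendall(struct.pack("!I", len(frame)) + frame)
    ```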

  9. Conformal, Transparent Printed Antenna Developed for Communication and Navigation Systems

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Simons, Rainee N.

    1999-01-01

    Conformal, transparent printed antennas have advantages over conventional antennas in terms of space reuse and aesthetics. Because of their compactness and thin profile, these antennas can be mounted on video displays for efficient integration in communication systems such as palmtop computers, digital telephones, and flat-panel television displays. As an array of multiple elements, the antenna subsystem may save weight by reusing space (via vertical stacking) on photovoltaic arrays or on Earth-facing sensors. Also, the antenna could go unnoticed on automobile windshields or building windows, enabling satellite uplinks and downlinks or other emerging high-frequency communications.

  10. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.
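
    The role of the depth map in reconstructing skipped views can be seen in the basic depth-image-based rendering step: each reference pixel is shifted by a disparity inversely proportional to its depth. The toy forward warp below (hypothetical function, horizontal shift only, no z-buffering or hole filling) illustrates why transmitting depth lets the receiver synthesize views that were never coded.

    ```python
    import numpy as np

    # Toy forward warp of a reference view to a horizontally shifted virtual
    # view using its depth map. Real systems handle occlusion ordering,
    # hole filling, and sub-pixel positions far more carefully.
    def warp_view(color, depth, baseline_px):
        h, w = depth.shape
        out = np.zeros_like(color)
        disparity = (baseline_px / np.maximum(depth, 1e-6)).astype(int)
        xs = np.arange(w)
        for y in range(h):
            xt = xs + disparity[y]                   # target columns
            valid = (xt >= 0) & (xt < w)
            # Later writes overwrite earlier ones (no z-buffer in this toy).
            out[y, xt[valid]] = color[y, xs[valid]]
        return out
    ```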

  11. Gigantic Rolling Wave Captured on the Sun [hd video

    NASA Image and Video Library

    2017-12-08

    A coronal mass ejection (CME) erupted from just around the edge of the sun on May 1, 2013, in a gigantic rolling wave. CMEs can shoot over a billion tons of particles into space at over a million miles per hour. This CME occurred on the sun's limb and is not headed toward Earth. The video, taken in extreme ultraviolet light by NASA's Solar Dynamics Observatory (SDO), covers about two and a half hours. Credit: NASA/Goddard/SDO

  12. Earth from Orbit 2014

    NASA Image and Video Library

    2015-04-20

    Every day of every year, NASA satellites provide useful data about our home planet, and along the way, some beautiful images as well. This video includes satellite images of Earth in 2014 from NASA and its partners as well as photos and a time lapse video from the International Space Station. We've also included a range of data visualizations, model runs, and a conceptual animation that were produced in 2014 (but in some cases might have been utilizing data from earlier years). Credit: NASA's Goddard Space Flight Center

  13. Update on POCIT portable optical communicators: VideoBeam and EtherBeam

    NASA Astrophysics Data System (ADS)

    Mecherle, G. Stephen; Holcomb, Terry L.

    2000-05-01

    LDSC is developing the POCIT (Portable Optical Communication Integrated Transceiver) family of products, which includes VideoBeam and the latest addition, EtherBeam. Each is a full-duplex portable laser communicator: VideoBeam provides near-broadcast-quality analog video and stereo audio, and EtherBeam provides standard Ethernet connectivity. Each POCIT transceiver is a 3.5-pound unit with a binocular-type form factor that can be manually pointed, tripod-mounted, or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low-probability-of-interception characteristics. The transmission wavelength of 1550 nm enables Class 1 eye-safe operation (ANSI, IEC). The POCIT units are well suited to numerous military scenarios, surveillance/espionage, industrial precious-mineral exploration, and campus video teleconferencing applications. VideoBeam will be available in the second quarter of 2000, followed by EtherBeam in the third quarter of 2000.

  14. The use of video clips in teleconsultation for preschool children with movement disorders.

    PubMed

    Gorter, Hetty; Lucas, Cees; Groothuis-Oudshoorn, Karin; Maathuis, Carel; van Wijlen-Hempel, Rietje; Elvers, Hans

    2013-01-01

    To investigate the reliability and validity of video clips in assessing movement disorders in preschool children. The study group included 27 children with neuromotor concerns. The explorative validity group included children with motor problems (n = 21) or with typical development (n = 9). Hempel screening was used for live observation of the child, for the full recording, and for short video clips. The explorative study tested the validity of the clinical classifications "typical" or "suspect." Agreement between live observation and the full recording was almost perfect; agreement for the clinical classification "typical" or "suspect" was substantial. Agreement between the full recording and short video clips was substantial to moderate. The explorative validity study, based on short video clips and the presence of a neuromotor developmental disorder, showed substantial agreement. Hempel screening enables reliable and valid observation of video clips, but further research is necessary to demonstrate the predictive value.

  15. A Miniaturized Video System for Monitoring Drosophila Behavior

    NASA Technical Reports Server (NTRS)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    Long-term spaceflight may induce a variety of harmful effects in astronauts, resulting in altered motor and cognitive behavior. The stresses experienced by humans in space - most significantly weightlessness (microgravity) and cosmic radiation - are difficult to accurately simulate on Earth. In fact, prolonged and concomitant exposure to microgravity and cosmic radiation can only be studied in space. Behavioral studies in space have focused on model organisms, including Drosophila melanogaster. Drosophila is often used due to its short life span and generational cycle, small size, and ease of maintenance. Additionally, the well-characterized genetics of Drosophila behavior on Earth can be applied to the analysis of results from spaceflights, provided that the behavior in space is accurately recorded. In 2001, the BioExplorer project introduced a low-cost option for researchers: the small satellite. While this approach enabled multiple inexpensive launches of biological experiments, it also imposed stringent restrictions on the monitoring systems in terms of size, mass, data bandwidth, and power consumption. Suggested parameters are on the order of (100 mm)³ in volume and 1 kg in mass for the entire payload. For Drosophila behavioral studies, these engineering requirements are not met by commercially available systems. One system that does meet many requirements for behavioral studies in space is the actimeter. Actimeters use infrared light gates to track the number of times a fly crosses a boundary within a small container (3x3x40 mm). Unfortunately, the apparatus needed to monitor several flies at once would be larger than the capacity of the small satellite. A system is presented which expands on the actimeter approach to achieve a highly compact, low-power, ultra-low-bandwidth solution for simultaneous monitoring of the behavior of multiple flies in space. It also provides a simple, inexpensive alternative to current systems for monitoring Drosophila populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, which did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration.
The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
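
    A digital analogue of this processing chain, band-pass filtering the frame-averaged luminance and thresholding for movement events, is sketched below. The sampling rate, filter order, and threshold rule are illustrative assumptions; the flight hardware used an analog circuit.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    FS = 30.0  # hypothetical sampling rate of the luminance trace, Hz

    # Digital counterpart of the 0.3-10 Hz analog band-pass stage, applied
    # to the frame-averaged luminance signal.
    b, a = butter(2, [0.3 / (FS / 2), 10 / (FS / 2)], btype="band")

    def fly_events(mean_luminance):
        activity = filtfilt(b, a, mean_luminance)
        # Movement "events": excursions exceeding a noise-derived threshold.
        thresh = 4 * np.median(np.abs(activity))
        peaks, _ = find_peaks(np.abs(activity), height=thresh,
                              distance=int(FS / 10))
        return peaks / FS  # event times in seconds
    ```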

  16. Coordinated traffic incident management using the I-Net embedded sensor architecture

    NASA Astrophysics Data System (ADS)

    Dudziak, Martin J.

    1999-01-01

    The I-Net intelligent embedded sensor architecture enables the reconfigurable construction of wide-area remote sensing and data collection networks employing diverse processing and data acquisition modules communicating over thin-server/thin-client protocols. Adapted initially for operation on mobile remotely piloted vehicle (RPV) platforms such as the Hornet and Ascend-I small helicopter robots, the I-Net architecture addresses a critical problem in the management of both spontaneous and planned traffic congestion and rerouting over major interstate thoroughfares such as the I-95 Corridor. Pre-programmed flight plans and ad hoc operator-assisted navigation of the lightweight helicopter, using an autopilot and gyroscopic stabilization augmentation units, allow daytime or nighttime over-the-horizon flights to collect and transmit real-time video imagery that may be stored or transmitted to other locations. With on-board GPS and ground-based pattern recognition capabilities to augment the standard video collection process, this approach enables traffic management and emergency response teams to plan and assist in real time in the adjustment of traffic flows in high-density or congested areas or during dangerous road conditions such as ice, snow, and hurricane storms. The I-Net architecture allows for the integration of land-based and roadside sensors within a comprehensive automated traffic management system, with communications to and from an airborne or other platform to devices in the network other than human-operated desktop computers, thereby allowing more rapid assimilation of and response to critical data. Experiments have been conducted using several modified platforms and standard video and still photographic equipment. Current research and development is focused on modifying the modular instrumentation units to accommodate faster loading and reloading of equipment onto the RPV, extending the I-Net architecture to enable RPV-to-RPV signaling and control, and refining safety and emergency mechanisms to handle RPV mechanical failure during flight.

  17. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  18. Video Browsing on Handheld Devices

    NASA Astrophysics Data System (ADS)

    Hürst, Wolfgang

    Recent improvements in processing power, storage space, and video codec development now enable users to play back video on their handheld devices in reasonable quality. However, given the form-factor restrictions of such a mobile device, screen size remains a natural limit and, as the term "handheld" implies, always will be a critical resource. This is true not only for video but for any data processed on such devices. For this reason, developers have come up with new and innovative ways to deal with large documents in such limited scenarios. For example, on the iPhone, innovative techniques such as flicking have been introduced to skim large lists of text (e.g., hundreds of entries in a music collection). Automatically adapting the zoom level, for example to the width of table cells when double-tapping the screen, enables reasonable browsing of web pages that were originally designed for large, desktop-PC-sized screens. A multi-touch interface allows users to easily zoom in and out of large text documents and images using two fingers. In the next section, we will illustrate that advanced techniques to browse large video files have been developed in the past years as well. However, state-of-the-art video players on mobile devices normally support just simple, VCR-like controls (at least at the time of this writing) that only allow users to start, stop, and pause video playback. If supported at all, browsing and navigation functionality is often restricted to simple skipping of chapters via two single buttons for backward and forward navigation and a small, and thus not very sensitive, timeline slider.

  19. Photogrammetry with an Unmanned Aerial System to Assess Body Condition and Growth of Blainville’s Beaked Whales

    DTIC Science & Technology

    2015-09-30

    metrics for key age/sex classes: 1) width profiles for adult females, specifically comparing those with (lactating) and without dependent young... hexacopter using remote controls at a height of ~100 ft, aided by live video output from the hexacopter that will be monitored on a portable ground unit... Blainville's beaked whales can be readily assigned to age/sex classes from photographs of dentition and scarring (Claridge 2013), enabling us to link

  20. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
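
    The claim that reconstruction "requires only a linear remapping" amounts to a precomputed gather: every (x, y, band) sample lives at a fixed, calibration-determined detector pixel. A sketch follows, using the 44x40 spatial grid and 300 bands from the abstract and a hypothetical lookup-table file.

    ```python
    import numpy as np

    # Dimensions from the abstract: 44x40 spatial samples, 300 spectral bands.
    NX, NY, NB = 44, 40, 300

    # Hypothetical calibration product: for every (x, y, band), the flat index
    # of the detector pixel where the fiber reformatter placed that sample.
    lut = np.load("fiber_remap_lut.npy")  # assumed shape (NX, NY, NB), int

    def frame_to_cube(detector_frame):
        # The remap is a single fancy-indexing gather: linear in the data,
        # hence fast enough for real-time display at 30 cubes/s.
        return detector_frame.ravel()[lut]
    ```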

  1. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

    Image modelling is a video-based teaching tool that is a combination of strobe images and video analysis. This tool can enable a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealling way than the traditional expositive practice. In a specific scenario shown in this paper, the Ollie trick, we…

  2. The Perspective of Six Malaysian Students on Playing Video Games: Beneficial or Detrimental?

    ERIC Educational Resources Information Center

    Baki, Roselan; Yee Leng, Eow; Wan Ali, Wan Zah; Mahmud, Rosnaini; Hamzah, Mohd. Sahandri Gani

    2008-01-01

    This study provides a glimpse into understanding the potential benefits as well as harm of playing video games from the perspective of six Malaysian secondary school students, aged 16-17 years old. The rationale of the study is to enable parents, educators, administrators and policy makers to develop a sound understanding on the impact of playing…

  3. Do Video Reviews of Therapy Sessions Help People with Mild Intellectual Disabilities Describe Their Perceptions of Cognitive Behaviour Therapy?

    ERIC Educational Resources Information Center

    Burford, B.; Jahoda, A.

    2012-01-01

    Background: This study examined the potential of a retrospective video reviewing process [Burford Reviewing Process (BRP)] for enabling people with intellectual disabilities to describe their experiences of cognitive behaviour therapy (CBT). It is the first time that the BRP, described in this paper, has been used with people with intellectual…

  4. The New York Times Guide to the Best Children's Videos.

    ERIC Educational Resources Information Center

    1999

    More parents than ever before are making a conscious decision to be more selective about what their children watch and the types of games they play. This guide lists recommended videos and provides parents with informative guidelines to enable them to make informed program choices. The first part of the guide presents brief reports from the field:…

  5. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children have been viewed as particularly suited to this aim because cameras have been considered easy and…

  6. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  7. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires compression for storage and especially for transmission. Conventional codecs are perfectly efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the positions of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and the competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to offer an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.
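
    The gain from a mixed dictionary can be demonstrated with a toy greedy coding of a depth patch: smooth DCT atoms alone need many coefficients to fit a sharp discontinuity, while adding step-edge atoms lets a few coefficients suffice. The sketch below applies scikit-learn's orthogonal matching pursuit to an 8x8 patch; the dictionary construction is illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    # Mixed dictionary over 8x8 patches: separable DCT atoms for smooth
    # regions plus vertical step-edge atoms for depth discontinuities.
    n = 8
    dct = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)
    atoms = [np.outer(dct[:, i], dct[:, j]) for i in range(n) for j in range(n)]
    atoms += [np.where(np.arange(n) < k, 1.0, 0.0)[None, :].repeat(n, 0)
              for k in range(1, n)]  # vertical step edges
    D = np.stack([a.ravel() / np.linalg.norm(a) for a in atoms], axis=1)

    # A patch containing a sharp depth discontinuity.
    patch = np.zeros((n, n))
    patch[:, :3] = 10.0

    # Greedy selection of only 4 atoms reconstructs the edge almost exactly.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4).fit(D, patch.ravel())
    recon = (D @ omp.coef_ + omp.intercept_).reshape(n, n)
    print(np.abs(recon - patch).max())
    ```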

  8. Sudden Event Recognition: A Survey

    PubMed Central

    Suriani, Nor Surayahani; Hussain, Aini; Zulkifley, Mohd Asyraf

    2013-01-01

    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition. PMID:23921828

  9. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
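
    Cylindrical warping, the step used to merge adjacent camera images at their borders, projects each image onto a common cylinder before stitching. A compact sketch follows, assuming a pinhole camera with a known focal length in pixels; this is an illustration of the technique, not the prototype's code.

    ```python
    import cv2
    import numpy as np

    # Project one camera image onto a cylinder so that differently aimed
    # views can be blended at their borders. f: focal length in pixels.
    def cylindrical_warp(img, f):
        h, w = img.shape[:2]
        yi, xi = np.indices((h, w), dtype=np.float32)
        theta = (xi - w / 2) / f           # azimuth angle on the cylinder
        hcyl = (yi - h / 2) / f            # height on the cylinder
        # Back-project cylinder coordinates to the original image plane.
        map_x = f * np.tan(theta) + w / 2
        map_y = f * hcyl / np.cos(theta) + h / 2
        return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_CONSTANT)
    ```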

  10. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
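
    The first innovation's non-linear motion model can be illustrated in its simplest form: with three matched pixel positions across three frames, a constant-acceleration (quadratic) extrapolation predicts the fourth. This toy predictor sketches the idea only; it is not the patented adaptive model.

    ```python
    import numpy as np

    # Constant-acceleration extrapolation of a pixel trajectory: given matched
    # positions p1 -> p2 -> p3 in three frames, predict p4 in the next frame.
    def predict_next(p1, p2, p3):
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        v23 = p3 - p2                   # latest velocity
        accel = (p3 - p2) - (p2 - p1)   # change in velocity between frame pairs
        return p3 + v23 + accel         # second-order (quadratic) extrapolation

    print(predict_next((10, 10), (14, 11), (20, 13)))  # -> [28. 16.]
    ```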

  11. The future of 3D and video coding in mobile and the internet

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2013-09-01

    The success of the Internet has already changed our social and economic world and continues to revolutionize information exchange. The exponential increase in the amount and variety of data exchanged on the Internet represents a significant challenge for the design of future architectures and solutions. This paper reviews the current status of, and trends in, future-Internet research from the point of view of managing the growth in bandwidth requirements and in the complexity of the multimedia being created and shared. It outlines the challenges facing video coding and approaches to the design of standardized media formats and protocols, considering the expected convergence of multimedia formats and exchange interfaces. The rapid growth of connected mobile devices adds to current and future challenges, in combination with the expected arrival, in the near future, of a multitude of connected devices. New Internet technologies connecting the Internet of Things with wireless visual sensor networks and 3D virtual worlds require conceptually new approaches to media content handling, from acquisition to presentation, in the 3D Media Internet. Accounting for the properties of the entire transmission system, and enabling real-time adaptation to context and content throughout the media processing path, will be paramount in enabling the new media architectures as well as the new applications and services. Common video coding formats will need to be conceptually redesigned to allow implementation of the necessary 3D Media Internet features.

  12. Pyroclast Tracking Velocimetry: A particle tracking velocimetry-based tool for the study of Strombolian explosive eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca

    2014-07-01

    Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and allow inferences to be drawn on the dynamics of volcanic activity. However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, a significant preprocessing stage removes the image background and detects the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original velocity predictor based on the optical-flow equation. Finally, postprocessing corrects the systematic errors of the measurements. Four high-speed videos of Strombolian explosions from Yasur and Stromboli volcanoes, representing various observation conditions, have been used to test the efficiency of PyTV against manual analysis. In all cases, more than 10⁶ pyroclasts have been successfully detected and tracked by PyTV, with a precision of 1 m/s for the velocity and 20% for the size of the pyroclasts. In each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links the physical properties and motion of individual pyroclasts. It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement to other geological and geophysical volcano observation systems.
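
    The predictor-based tracking step can be sketched in a few lines. PyTV's predictor is derived from the optical-flow equation; in the simplified stand-in below, each particle's previous velocity serves as the predictor, and the matching tolerance `max_err` is an assumed parameter.

```python
import numpy as np

def track_step(prev_pos, prev_vel, detections, max_err=5.0):
    """One tracking step: predict each particle's next position from its
    current velocity, then match it to the nearest detection in the new
    frame. A constant-velocity predictor stands in for PyTV's optical-flow
    predictor; `max_err` (pixels) is an assumed matching tolerance."""
    new_pos, new_vel = [], []
    for p, v in zip(prev_pos, prev_vel):
        pred = p + v                              # predicted position
        dists = np.linalg.norm(detections - pred, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_err:                    # accept match within tolerance
            new_pos.append(detections[j])
            new_vel.append(detections[j] - p)     # refreshed velocity estimate
    return np.array(new_pos), np.array(new_vel)
```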

  13. Biotic games and cloud experimentation as novel media for biophysics education

    NASA Astrophysics Data System (ADS)

    Riedel-Kruse, Ingmar; Blikstein, Paulo

    2014-03-01

    First-hand, open-ended experimentation is key for effective formal and informal biophysics education. We developed, tested, and assessed multiple new platforms that enable students and children to directly interact with and learn about microscopic biophysical processes: (1) biotic games that enable local and online play using galvano- and photo-tactic stimulation of micro-swimmers, illustrating concepts such as biased random walks, low-Reynolds-number hydrodynamics, and Brownian motion; (2) an undergraduate course where students learn optics, electronics, microfluidics, real-time image analysis, and instrument control by building biotic games; and (3) a graduate class on the biophysics of multi-cellular systems that contains a cloud experimentation lab enabling students to execute open-ended chemotaxis experiments on slime molds online, analyze their data, and build biophysical models. Our work aims to generate the equivalent excitement and educational impact for biophysics that robotics and video games have had for mechatronics and computer science, respectively. We also discuss how scaled-up cloud experimentation systems can support MOOCs with true lab components, and life-science research in general.

  14. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy; an action-intensive movie, Aliens; a luminance-intensive (black and white) movie, Road to Utopia; and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly correlated Gaussian process is used to model the individual frame sizes. Scene-change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic-shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 over an ATM/SONET OC3 link without latency or cell-loss penalty. A patent application has been filed for this scheme.
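
    A minimal traffic generator in the spirit of this model is sketched below. The GOP pattern, mean frame sizes, deviations, and the AR(1) correlation coefficient are assumed placeholder values, not the statistics fitted from the sample movies.

```python
import numpy as np

# Minimal sketch of a GOP-structured traffic generator: a 15-stage cycle
# emits the I, B, P pattern of one group of pictures, and frame sizes are
# drawn from an AR(1) Gaussian process so successive frames are correlated.
GOP = "IBBPBBPBBPBBPBB"                             # one stage per frame
MEAN = {"I": 120_000, "P": 60_000, "B": 30_000}     # mean frame size, bits
STD  = {"I": 20_000,  "P": 12_000, "B": 6_000}      # standard deviation, bits

def frame_sizes(n_frames, rho=0.7, seed=0):
    rng = np.random.default_rng(seed)
    sizes, z = [], 0.0
    for k in range(n_frames):
        t = GOP[k % len(GOP)]                       # stage k of the cycle
        # AR(1) update keeps the innovation process at unit variance
        z = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal()
        sizes.append(max(0.0, MEAN[t] + STD[t] * z))
    return sizes
```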

  15. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video at scale with consistent quality. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, however, much of that bandwidth, and of the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport request time based on the buffer status and the head orientation. The paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video streaming methods. The proposed dual-buffer segment scheduling method is implemented in an end-to-end tile-based 360° viewport-adaptive video streaming platform, in which the entire 360° video is divided into a number of tiles and each tile is independently encoded into multiple quality-level representations. The client requests different quality-level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. The simulation results verify that the proposed dual-buffer segment scheduling algorithm reduces viewport switch latency and utilizes available bandwidth more efficiently. As a result, a more consistent immersive 360° video viewing experience can be presented to the user.
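
    The decoupled-buffer idea can be sketched as follows. The buffer and request interfaces (`base_buf`, `vp_buf`, `head_pose`, `request`) and the two buffer-depth targets are hypothetical names chosen for illustration, not parts of the platform described in the record.

```python
def schedule_step(base_buf, vp_buf, head_pose, request,
                  base_target=10.0, vp_target=2.0):
    """One scheduling pass. The base buffer is kept deep so playback never
    stalls; the viewport buffer is kept shallow so high-quality tiles
    always track the latest head orientation. Targets are in seconds and
    are assumed values."""
    if base_buf.seconds_buffered() < base_target:
        request(quality="low", tiles="all")       # keep the safety net full
    if vp_buf.seconds_buffered() < vp_target:
        # Requesting late, against the freshest pose, is what cuts the
        # viewport-switch latency.
        request(quality="high", tiles=head_pose.visible_tiles())
```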

  16. Supervisory autonomous local-remote control system design: Near-term and far-term applications

    NASA Technical Reports Server (NTRS)

    Zimmerman, Wayne; Backes, Paul

    1993-01-01

    The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work-site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time delay. The remote site performs the closed-loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.

  17. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with greater precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still unclassified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms measure both the shape and the brightness of the event, filtering out irrelevant events such as birds and locating each relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and counts its discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges that had previously been detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged to cover a 360° field of view, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
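
    The frame-differencing trigger at the heart of the detection module can be illustrated compactly. The record's system is written in C with OpenCV; the sketch below uses OpenCV's Python bindings, and the difference threshold and minimum changed-pixel count are assumed values.

```python
import cv2

def detect_events(video_path, thresh=25, min_changed=500):
    """Flag frames that differ significantly from the previous frame, as
    lightning-candidate triggers for later classification. Threshold and
    minimum changed-pixel count are assumed, not the project's values."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    events, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_no += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > min_changed:   # significant change
            events.append(frame_no)
        prev = gray
    cap.release()
    return events
```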

  18. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    PubMed Central

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable the analysis of vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousand lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and on the distributions of their projected diameters and isoperimetric quotients (a measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspension are heterogeneous in size and shape and are distributed non-homogeneously throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
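
    The isoperimetric quotient has a compact definition, Q = 4πA/P², which equals 1 for a circle and decreases as a contour departs from roundness. A minimal computation from an ordered, closed boundary contour (the (N, 2) array format is an assumption):

```python
import numpy as np

def isoperimetric_quotient(contour):
    """Q = 4*pi*A / P**2 for a closed contour given as an (N, 2) array
    of ordered boundary points; 1.0 for a circle, smaller otherwise."""
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: segment lengths, wrapping the last point to the first
    perim = np.sum(np.linalg.norm(np.roll(contour, -1, axis=0) - contour,
                                  axis=1))
    return 4.0 * np.pi * area / perim**2

# Sanity check: a finely sampled circle gives Q very close to 1
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
print(isoperimetric_quotient(circle))   # ~0.9999
```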

  19. Creating a web-enhanced interactive preclinic technique manual: case report and student response.

    PubMed

    Boberick, Kenneth G

    2004-12-01

    This article describes the development, use, and student response to an online manual developed with off-the-shelf software and made available using a web-based course management system (Blackboard) that was used to transform a freshman restorative preclinical technique course from a lecture-only course into an interactive web-enhanced course. The goals of the project were to develop and implement a web-enhanced interactive learning experience in a preclinical restorative technique course and shift preclinical education from a teacher-centered experience to a student-driven experience. The project was evaluated using an anonymous post-course survey (95 percent response rate) of 123 freshman students that assessed enabling (technical support and access to the technology), process (the actual experience and usability), and outcome criteria (acquisition and successful use of the knowledge gained and skills learned) of the online manual. Students responded favorably to sections called "slide galleries" where ideal and non-ideal examples of projects could be viewed. Causes, solutions, and preventive measures were provided for the errors shown. Sections called "slide series" provided cookbook directions allowing for self-paced and student-directed learning. Virtually all of the students, 99 percent, found the quality of the streaming videos adequate to excellent. Regarding Internet connections and video viewing, 65 percent of students successfully viewed the videos from a remote site; cable connections were the most reliable, dial-up connections were inadequate, and DSL connections were variable. Seventy-three percent of the students felt the videos were an effective substitute for in-class demonstrations. Students preferred video with sound over video with subtitles and preferred short video clips embedded in the text over compilation videos. The results showed it is possible to develop and implement web-enhanced and interactive dental education in a preclinical restorative technique course that successfully delivered information beyond the textual format.

  20. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  1. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  2. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  3. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  4. 47 CFR 76.1503 - Carriage of video programming providers on open video systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Carriage of video programming providers on open video systems. 76.1503 Section 76.1503 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1503...

  5. Portable emergency telemedicine system over wireless broadband and 3G networks.

    PubMed

    Hong, SungHye; Kim, SangYong; Kim, JungChae; Lim, DongKyu; Jung, SeokMyung; Kim, DongKeun; Yoo, Sun K

    2009-01-01

    Telemedicine systems aim to monitor patients remotely, without limits in time and space. However, existing telemedicine systems exchange medical information only at specified locations. Owing to increasing data-processing speeds and the expanding bandwidth of wireless networks, it is now possible to deliver telemedicine services on personal digital assistants (PDAs). In this paper, a PDA-based telemedicine system was developed over wideband mobile networks such as Wi-Fi, HSDPA, and WiBro for high-speed bandwidth. The system enables the exchange of varied, reliable patient information, including video, biosignals, chat messages, and triage data. By measuring the bandwidth of each data type over the wireless networks and evaluating the performance of the system on a PDA, we demonstrated the feasibility of the designed portable emergency telemedicine system.

  6. Enhancing the performance of cooperative face detector by NFGS

    NASA Astrophysics Data System (ADS)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important task of deformable pattern recognition in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking, and video surveillance, the performance of the face-detection engine in terms of accuracy, memory utilization, and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted as gray textures. When the input is a high-resolution online video with a fairly large viewing area, a Haar detector needs to search for faces everywhere (say, over 352×250 pixels) and all the time (e.g., 30 FPS capture throughout). In the current paper we propose to address both of these costs by a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from the gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment, such as illumination. The proposed algorithm triggers the face detector only when a new entity appears in the viewing area. To preserve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (false rejection) and FA (false acceptance) rates of the face detection system.
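
    The trigger-then-narrow idea can be sketched with standard OpenCV building blocks. The sketch below is not the NFGS algorithm itself: a plain frame difference against the reference frame stands in for the neuro-visually inspired figure-ground segregation, and the threshold and cascade parameters are assumed values.

```python
import cv2

# Stock Haar cascade shipped with OpenCV's Python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_in_roi(reference_gray, frame, thresh=30):
    """Run the (expensive) cascade only inside the box of changed pixels;
    when nothing changed, the detector is not triggered at all."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(cv2.absdiff(gray, reference_gray),
                            thresh, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(mask)
    if pts is None:
        return []                         # no new entity: detector stays off
    x, y, w, h = cv2.boundingRect(pts)    # narrowed-down region of interest
    faces = cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 4)
    return [(x + fx, y + fy, fw, fh) for (fx, fy, fw, fh) in faces]
```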

  7. The Impact of Video Length on Learning in a Middle-Level Flipped Science Setting: Implications for Diversity Inclusion

    NASA Astrophysics Data System (ADS)

    Slemmons, Krista; Anyanwu, Kele; Hames, Josh; Grabski, Dave; Mlsna, Jeffery; Simkins, Eric; Cook, Perry

    2018-05-01

    Popularity of videos for classroom instruction has increased over the years due to the affordability and user-friendliness of today's digital video cameras. This prevalence has led to an increase in flipped K-12 classrooms countrywide. However, quantitative data establishing the appropriate video length to foster authentic learning are limited, particularly for middle-level classrooms. We focus on this aspect of video technology in two flipped science classrooms at the middle school level to determine the optimal video length to enable learning, increase retention, and support student motivation. Our results indicate that while assessment scores directly following short videos were slightly higher, these findings were not significantly different from scores following longer videos. While short-term retention of material did not seem to be influenced by video length, longer-term retention for males and for students with learning disabilities was higher following short videos than long ones, as measured on summative assessments. Students self-report that they were more engaged, had enhanced focus, and had a perceived higher retention of content following shorter videos. This study has important implications for student learning, application of content, and the development of critical thinking skills. This is particularly paramount in an era where content knowledge is just a search engine away.

  8. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improving safety and effectiveness in medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. VW observations can also reduce the number of false-positive incidents and expand recognition coverage to abnormal health conditions. Synchronization between the video images and the physiological-signal waveforms is fundamental for successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display video-waveform information involves complex tasks such as video capture/compression, the acquisition/compression of each physiological signal, and video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and simplifies the complexity of video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signals' samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
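
    As a rough illustration of the general idea (not the paper's exact H.264/AVC embedding-channel mechanism), the sketch below rewrites the least-significant bit of nonzero quantized transform coefficients with the bits of one physiological sample, so signal and video stay frame-synchronized by construction.

```python
import numpy as np

def embed_bits(coeffs, payload_bits):
    """Hide one payload bit in the least-significant bit of each nonzero
    quantized coefficient (sign preserved). Coefficients of magnitude 1
    are skipped so no nonzero coefficient is zeroed out."""
    flat = coeffs.ravel().copy()
    slots = [i for i, c in enumerate(flat) if abs(c) > 1][: len(payload_bits)]
    for bit, idx in zip(payload_bits, slots):
        mag = (abs(flat[idx]) & ~1) | bit      # rewrite LSB of |coefficient|
        flat[idx] = mag if flat[idx] > 0 else -mag
    return flat.reshape(coeffs.shape)

# Example: embed the 8 bits of one signal sample into a 4x4 block
block = np.array([[12, -3,  4,  2],
                  [ 7,  5, -2,  6],
                  [ 3, -4,  2,  0],
                  [ 0,  0,  0,  0]])
sample = 0b10110010
bits = [(sample >> k) & 1 for k in range(8)]
stego = embed_bits(block, bits)
```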

  9. Automated Rendezvous and Capture System Development and Simulation for NASA

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.; Murphy, Leslie

    2004-01-01

    The United States does not have an Automated Rendezvous and Capture/Docking (AR&C) capability and is reliant on manned control for rendezvous and docking of orbiting spacecraft. This reliance on the labor-intensive manned interface for control of rendezvous and docking vehicles has a significant impact on the cost of operating the International Space Station (ISS) and precludes the use of any U.S. expendable launch capabilities for Space Station resupply. The Soviets have the capability to autonomously dock in space, but their system produces a hard docking with excessive force and contact velocity. AR&C has been identified as a key enabling technology for the Space Launch Initiative (SLI) Program, DARPA Orbital Express, and other DOD programs. The development and implementation of an AR&C capability can significantly enhance system flexibility, improve safety, and lower the cost of maintaining, supplying, and operating the International Space Station. The Marshall Space Flight Center (MSFC) has conducted pioneering research in the development of an automated rendezvous and capture (or docking) system for U.S. space vehicles. This AR&C system was tested extensively using hardware-in-the-loop simulations in the Flight Robotics Laboratory, and a rendezvous sensor, the Video Guidance Sensor, was developed and successfully flown on the Space Shuttle on flights STS-87 and STS-95, proving the concept of a video-based sensor. Further developments in sensor technology and vehicle and target configuration have led to continued improvements and changes in AR&C system development and simulation. A new Advanced Video Guidance Sensor (AVGS) with target will be utilized on the Demonstration of Autonomous Rendezvous Technologies (DART) flight experiment in 2004.

  10. A Role for YouTube in Telerehabilitation

    PubMed Central

    Manasco, M. Hunter; Barone, Nicholas; Brown, Amanda

    2010-01-01

    YouTube (http://youtube.com) is a free video-sharing website that allows users to post and view videos. Although there are definite limitations in the applicability of this website to telerehabilitation, the YouTube technology offers potential uses that should not be overlooked. For example, some types of therapy, such as errorless learning therapy for certain language and cognitive deficits, can be provided remotely via YouTube. In addition, the website's social networking capabilities, via the asynchronous posting of comments and videos in response to posted videos, enable individuals to gain valuable emotional support by communicating with others with similar health and rehabilitation challenges. This article addresses the benefits and limitations of YouTube in the context of telerehabilitation and reports patient feedback on errorless learning therapy for aphasia delivered via videos posted on YouTube. PMID:25945173

  11. POCIT portable optical communicators: VideoBeam and EtherBeam

    NASA Astrophysics Data System (ADS)

    Mecherle, G. Stephen; Holcomb, Terry L.

    1999-12-01

    LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which now includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full-duplex portable laser communicator: VideoBeam™ provides near-broadcast-quality analog video and stereo audio, and EtherBeam™ provides standard Ethernet connectivity. Each POCIT™ transceiver consists of a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted, or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam resistance and low-probability-of-interception characteristics. The transmission wavelength of 1550 nm enables Class I eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious-mineral exploration, and campus video teleconferencing applications.

  12. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...

  13. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...

  14. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...

  15. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1504 Rates, terms and conditions for carriage on open video systems. (a) Reasonable rate principle. An...

  16. Video mining using combinations of unsupervised and supervised learning techniques

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

    We discuss the meaning and significance of the video mining problem and present our work on some aspects of it. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in the first. We examine the target applications and find that purely unsupervised approaches are too computationally complex to implement on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns useful to the end user of the application. We target consumer video browsing applications such as commercial-message detection, sports-highlight extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual-event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles, etc.
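
    One way to realize this combination is sketched below: a Gaussian mixture model trained on "usual" audio features flags low-likelihood frames as candidate unusual events, which a supervised classifier can then confirm. The feature representation, component count, and percentile cutoff are assumed values, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def find_unusual(features_usual, features_stream, percentile=5):
    """Fit a GMM to features of ordinary content, then flag frames of the
    incoming stream whose log-likelihood under that model is unusually
    low; those frames become candidates for supervised event detection."""
    gmm = GaussianMixture(n_components=8, random_state=0).fit(features_usual)
    ll = gmm.score_samples(features_stream)        # per-frame log-likelihood
    cutoff = np.percentile(ll, percentile)
    return np.flatnonzero(ll < cutoff)             # candidate event frames
```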

  17. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

    Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system, whose video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC, in a single- or dual-node setup, has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation, and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also provide reliability through replication of computation. We securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of laparoscopic image data.
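
    The per-frame budget quoted above can be checked with simple arithmetic:

```python
# Back-of-envelope check of the frame budget quoted in the abstract.
frame_mb = 11.9            # size of one video frame, megabytes
fps = 30                   # frames per second
deadline_s = 1 / fps       # each frame must travel, compute, and return

stream_mb_s = frame_mb * fps
print(f"stream rate: {stream_mb_s:.0f} MB/s")           # ~357 MB/s, matching
print(f"per-frame deadline: {deadline_s * 1000:.1f} ms")  # the ~360 MB/s figure
```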

  18. Investigating helmet promotion for cyclists: results from a randomised study with observation of behaviour, using a semi-automatic video system.

    PubMed

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.

  19. Investigating Helmet Promotion for Cyclists: Results from a Randomised Study with Observation of Behaviour, Using a Semi-Automatic Video System

    PubMed Central

    Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel

    2012-01-01

    Introduction Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. Methods We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18–75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of “helmet only”, “helmet and information” or “information only”, and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. Results Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the “helmet only” group (OR = 7.73 [2.09–28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. Conclusion Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure. PMID:22355384

  20. State of the art in video system performance

    NASA Technical Reports Server (NTRS)

    Lewis, Michael J.

    1990-01-01

    The closed-circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid-state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is shown graphically against users' requirements.

  1. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) by the tailoring of vintage Electronic Industries Association (EIA) RS170 and RS343A video interface standards. Analog video interfaces degrade when induced system noise is present. Further signal degradation has traditionally been associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring of, and technical involvement in, video standards groups provides the knowledge base necessary for avionics systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums such as the ITU-R (formerly CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog-based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal conversion processing steps, major improvements in video noise reduction, and an added capability to pass audio/embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology and definitions, and comparing and contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems-engineering approach to video interface standardization can result in extendible and affordable cockpit management systems.

  2. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
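
    The quoted 136-year rollover horizon is consistent with a 32-bit seconds counter; that encoding is an assumption for illustration, not a documented detail of the Geo-TimeCode(TradeMark) design. A quick check:

```python
# A 32-bit seconds counter repeats after 2**32 seconds; converting to
# years reproduces the "slightly more than 136 years" figure quoted above.
seconds = 2 ** 32
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1f} years")   # -> 136.1 years before the time code repeats
```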

  3. NASA's SDO Captures Mercury Transit Time-lapses SDO Captures Mercury Transit Time-lapse

    NASA Image and Video Library

    2017-12-08

    Less than once per decade, Mercury passes between the Earth and the sun in a rare astronomical event known as a planetary transit. The 2016 Mercury transit occurred on May 9th, between roughly 7:12 a.m. and 2:42 p.m. EDT. The images in this video are from NASA's Solar Dynamics Observatory. Music: Encompass by Mark Petrie. For more info on the Mercury transit go to: www.nasa.gov/transit This video is public domain and may be downloaded at: svs.gsfc.nasa.gov/12235

  4. Snaking Filament Eruption [video

    NASA Image and Video Library

    2014-11-14

    A filament (which at one point had an eerie similarity to a snake) broke away from the sun and out into space (Nov. 1, 2014). The video covers just over three hours of activity. This kind of eruptive event is called a Hyder flare. These are filaments (elongated clouds of gases above the sun's surface) that erupt and cause a brightening at the sun's surface, although no active regions are in that area. It did thrust out a cloud of particles, but not towards Earth. The images were taken in the 304 Angstrom wavelength of extreme UV light. Credit: NASA/Solar Dynamics Observatory

  5. Telelearning standards and their application in medical education.

    PubMed

    Duplaga, Mariusz; Juszkiewicz, Krzysztof; Leszczuk, Mikolaj

    2004-01-01

    Medical education, at both the graduate and postgraduate levels, has become a real challenge nowadays. The volume of information in the medical sciences grows so rapidly that many health professionals have substantial problems keeping track of the state of the art in their domain. E-learning offers important advantages for continuing medical education due to its universal availability and the opportunity to implement flexible patterns of training. An important facet of medical education is developing practical skills. Some examples of standardization efforts include the CEN/ISSS Workshop on Learning Technology (WSLT), the Advanced Learning Infrastructure Consortium (ALIC), Education Network Australia (EdNA), and PROmoting Multimedia access to Education and Training in European Society (PROMETEUS). Sun Microsystems' support (Sun ONE, iPlanet™) for many of the above-mentioned standards is described as well. Development of a medical digital video library with recordings of invasive procedures, incorporating additional information and commentary, may improve the efficiency of the training process in interventional medicine. A digital video library enabling access to videos of interventional procedures performed in the area of thoracic medicine may be a valuable element for developing practical skills. The library has been filled with video resources recorded at the Department of Interventional Pulmonology; it enhances training options for pulmonologists and thoracic surgeons. The main focus was put on demonstration of bronchofiberoscopic and videothoracoscopic procedures. The opportunity to browse video recordings of procedures performed in a specific field also considerably enhances the options for training in other medical specialties. In the era of growing health-consumer awareness, patients are also perceived as a target audience for medical digital libraries. As a case study of computer-based training systems, the Medical Digital Video Library is presented.

  6. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  7. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  8. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  9. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  10. 47 CFR 76.1712 - Open video system (OVS) requests for carriage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Open video system (OVS) requests for carriage... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Documents to be Maintained for Inspection § 76.1712 Open video system (OVS) requests for carriage. An open video system operator shall maintain a...

  11. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  12. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  13. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  14. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  15. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  16. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  17. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  18. 47 CFR 76.1501 - Qualifications to be an open video system operator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Qualifications to be an open video system... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1501 Qualifications to be an open video system operator. Any person may obtain a certification to operate an open...

  19. 47 CFR 76.1508 - Network non-duplication.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1508 Network non-duplication. (a) Sections 76.92 through 76.97 shall apply to open video systems in accordance with the provisions contained... unit” shall apply to an open video system or that portion of an open video system that operates or will...

  20. Innovative Solution to Video Enhancement

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  1. Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries, educational videos, etc. As compared to relatively structured format videos like CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.

  2. Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons

    PubMed Central

    Tekin, Ender; Coughlan, James M.; Shen, Huiying

    2011-01-01

    Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957
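
    As a rough illustration of the first stage such a pipeline needs, the sketch below (a hypothetical helper written against OpenCV 4 in Python, not the authors' actual detector) locates bright, high-contrast candidate regions in a frame before any character recognition is attempted:

        import cv2

        def find_display_candidates(frame_bgr, min_area=200):
            """Return bounding boxes of bright regions that may belong to an LED/LCD display."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            # Lit segments are brighter than their local surroundings, so an
            # adaptive threshold with a negative offset keeps only such pixels.
            binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                           cv2.THRESH_BINARY, 31, -10)
            contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            boxes = []
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                # Reject tiny specular highlights and implausible aspect ratios.
                if w * h >= min_area and 0.2 < w / float(h) < 5.0:
                    boxes.append((x, y, w, h))
            return boxes

    Running such a filter on every frame, rather than on single still photos, is what allows a system like this to operate at a few frames per second.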

  3. Unequal error control scheme for dimmable visible light communication systems

    NASA Astrophysics Data System (ADS)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues of such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
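
    The EWF construction itself is not reproduced in the abstract; as a toy illustration of the idea (the base layer sits in a small window that the encoder selects with higher probability, giving it stronger protection), a minimal encoder might look like the following, where the window probability and degree distribution are illustrative assumptions rather than the paper's optimized parameters:

        import random

        def ewf_encode(symbols, base_len, p_base=0.7, num_packets=100, seed=1):
            """Toy expanding-window fountain encoder.

            symbols  : equal-length byte strings, base-layer symbols first
            base_len : size of the small (base-layer) window
            p_base   : probability of drawing from the base window only,
                       which yields unequal error protection for the base layer
            """
            rng = random.Random(seed)
            packets = []
            for _ in range(num_packets):
                # Window 1 covers the base layer; window 2 expands to all symbols.
                win = base_len if rng.random() < p_base else len(symbols)
                degree = rng.randint(1, min(4, win))  # toy degree distribution
                idx = rng.sample(range(win), degree)
                pkt = bytearray(symbols[idx[0]])
                for i in idx[1:]:
                    for b, v in enumerate(symbols[i]):  # XOR the chosen symbols
                        pkt[b] ^= v
                packets.append((idx, bytes(pkt)))
            return packets

    A decoder recovers symbols by belief propagation over these XOR equations; because base-window symbols appear in more packets, they are recovered earlier, which is the behavior the paper reports for its base layer.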

  4. PointCom: semi-autonomous UGV control with intuitive interface

    NASA Astrophysics Data System (ADS)

    Rohde, Mitchell M.; Perlin, Victor E.; Iagnemma, Karl D.; Lupa, Robert M.; Rohde, Steven M.; Overholt, James; Fiorani, Graham

    2008-04-01

    Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual servoing and odometry to yield a unique, high-performance fused navigation system. Human Computer Interface (HCI) techniques from the entertainment software industry are being used to develop video-game style interfaces that require little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less burdensome than many current generation systems.

  5. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
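
    The core similarity test behind multiplet identification can be sketched in a few lines of NumPy; this is a generic normalized cross-correlation over equal-length waveforms, not the optimized Cell-processor code described in the paper:

        import numpy as np

        def max_normalized_xcorr(a, b):
            """Peak normalized cross-correlation of two equal-length waveforms."""
            a = np.asarray(a, dtype=float)
            b = np.asarray(b, dtype=float)
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return float(np.max(np.correlate(a, b, mode="full")))

        def find_multiplets(waveforms, threshold=0.9):
            """Flag event pairs whose waveform similarity exceeds a threshold."""
            pairs = []
            for i in range(len(waveforms)):
                for j in range(i + 1, len(waveforms)):
                    if max_normalized_xcorr(waveforms[i], waveforms[j]) >= threshold:
                        pairs.append((i, j))
            return pairs

    The pairwise loop is quadratic in the number of events, which is precisely why offloading the correlations to the PS3's parallel cores raised the practical event limit.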

  6. Fluorescence-guided tumor visualization using a custom designed NIR attachment to a surgical microscope for high sensitivity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.

    2016-03-01

    Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.

  7. A Usability Survey of a Contents-Based Video Retrieval System by Combining Digital Video and an Electronic Bulletin Board

    ERIC Educational Resources Information Center

    Haga, Hirohide; Kaneda, Shigeo

    2005-01-01

    This article describes a survey of the usability of a novel content-based video retrieval system. This system combines video streaming and an electronic bulletin board system (BBS). Comments submitted to the BBS are used to index video data. Following the development of the prototype system, an experimental survey with ten subjects was performed.…

  8. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

    With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies also casts light on the emergence of real-time video holographic systems, enabling animated 3-D contents to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved also imposes difficulty in practical applications. In this paper, a summary of a number of successful accomplishments that have been made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.

  9. Mapping of MPEG-4 decoding on a flexible architecture platform

    NASA Astrophysics Data System (ADS)

    van der Tol, Erik B.; Jaspers, Egbert G.

    2001-12-01

    In the field of consumer electronics, the advent of new features such as Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables simultaneous execution of very diverse tasks such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential over different applications and different systems. Finally, a feasible implementation is proposed that includes, amongst others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables an effective HW/SW co-design yielding a high performance density.

  10. Integrated microfluidic technology for sub-lethal and behavioral marine ecotoxicity biotests

    NASA Astrophysics Data System (ADS)

    Huang, Yushi; Reyes Aldasoro, Constantino Carlos; Persoone, Guido; Wlodkowic, Donald

    2015-06-01

    Changes in behavioral traits exhibited by small aquatic invertebrates are increasingly postulated as ethically acceptable and more sensitive endpoints for detection of water-borne ecotoxicity than conventional mortality assays. Despite the importance of such behavioral biotests, their implementation is profoundly limited by the lack of appropriate biocompatible automation, integrated optoelectronic sensors, and the associated electronics and analysis algorithms. This work outlines the development of a proof-of-concept miniaturized Lab-on-a-Chip (LOC) platform for rapid water toxicity tests based on changes in swimming patterns exhibited by Artemia franciscana (Artoxkit M™) nauplii. In contrast to conventionally performed end-point analysis based on counting the number of dead/immobile specimens, we performed a time-resolved video data analysis to dynamically assess the impact of a reference toxicant on the swimming pattern of A. franciscana. Our system design combined: (i) an innovative microfluidic device keeping free-swimming Artemia sp. nauplii under continuous microperfusion as a means of toxin delivery; (ii) a mechatronic interface for user-friendly fluidic actuation of the chip; and (iii) miniaturized video acquisition for movement analysis of test specimens. The system was capable of performing fully programmable time-lapse and video-microscopy of multiple samples for rapid ecotoxicity analysis. It enabled the development of a user-friendly and inexpensive test protocol to dynamically detect sub-lethal behavioral end-points such as changes in speed of movement or distance traveled by each animal.
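
    A heavily simplified, single-animal version of such time-resolved movement analysis is sketched below (OpenCV background subtraction plus centroid tracking; the pixel scale, file handling, and single-target assumption are illustrative, and the actual platform tracks multiple nauplii in parallel):

        import cv2
        import numpy as np

        def track_centroid(video_path, px_per_mm=10.0):
            """Track the largest moving object per frame; return distance (mm) and mean speed (mm/s)."""
            cap = cv2.VideoCapture(video_path)
            subtractor = cv2.createBackgroundSubtractorMOG2()
            fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
            points = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                mask = subtractor.apply(frame)
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                if contours:
                    c = max(contours, key=cv2.contourArea)
                    m = cv2.moments(c)
                    if m["m00"] > 0:  # centroid of the largest moving blob
                        points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            cap.release()
            if len(points) < 2:
                return 0.0, 0.0
            steps = np.diff(np.array(points), axis=0)
            dist_mm = float(np.sum(np.hypot(steps[:, 0], steps[:, 1]))) / px_per_mm
            return dist_mm, dist_mm / (len(points) / fps)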

  11. Nervous system examination on YouTube.

    PubMed

    Azer, Samy A; Aleshaiwi, Sarah M; Algrain, Hala A; Alkhelaif, Rana A

    2012-12-22

    Web 2.0 sites such as YouTube have become a useful resource for knowledge and are used by medical students as a learning resource. This study aimed at assessing videos covering the nervous system examination on YouTube. A search of YouTube was conducted from 2 November to 2 December 2011 using the following key words: "nervous system examination", "nervous system clinical examination", "cranial nerves examination", "CNS examination", "examination of cerebellum", "balance and coordination examination". Only relevant videos in the English language were identified and the related URLs recorded. For each video, the following information was collected: title, author/s, duration, number of viewers, number of posted comments, and total number of days on YouTube. Using criteria comprising content, technical authority and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-educationally useful. A total of 2240 videos were screened; 129 were found to have information relevant to nervous system examination. Analysis revealed that 61 (47%) of the videos provided useful information on the nervous system examination. These videos scored a mean ± SD of 14.9 ± 0.2 and mainly covered examination of the whole nervous system (8 videos, 13%), cranial nerves (42 videos, 69%), upper limbs (6 videos, 10%), lower limbs (3 videos, 5%), and balance and co-ordination (2 videos, 3%). The other 68 (53%) videos were not educationally useful, scoring a mean ± SD of 11.1 ± 3.0. The total number of viewers of all videos was 2,189,434; useful videos were viewed by 1,050,445 viewers (48% of total viewers). The total viewership per day was 1,794.5 for useful videos and 1,132.0 for non-useful videos. The differences between the three assessors were insignificant (less than 0.5 for the mean and 0.3 for the SD). Currently, YouTube provides an adequate resource for learning the nervous system examination, which can be used by medical students. However, there were deficiencies in videos covering examination of the cerebellum and balance system. Useful videos can be used as learning resources for medical students.

  12. Nervous system examination on YouTube

    PubMed Central

    2012-01-01

    Background Web 2.0 sites such as YouTube have become a useful resource for knowledge and are used by medical students as a learning resource. This study aimed at assessing videos covering the nervous system examination on YouTube. Methods A search of YouTube was conducted from 2 November to 2 December 2011 using the following key words: “nervous system examination”, “nervous system clinical examination”, “cranial nerves examination”, “CNS examination”, “examination of cerebellum”, “balance and coordination examination”. Only relevant videos in the English language were identified and the related URLs recorded. For each video, the following information was collected: title, author/s, duration, number of viewers, number of posted comments, and total number of days on YouTube. Using criteria comprising content, technical authority and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-educationally useful. Results A total of 2240 videos were screened; 129 were found to have information relevant to nervous system examination. Analysis revealed that 61 (47%) of the videos provided useful information on the nervous system examination. These videos scored a mean ± SD of 14.9 ± 0.2 and mainly covered examination of the whole nervous system (8 videos, 13%), cranial nerves (42 videos, 69%), upper limbs (6 videos, 10%), lower limbs (3 videos, 5%), and balance and co-ordination (2 videos, 3%). The other 68 (53%) videos were not educationally useful, scoring a mean ± SD of 11.1 ± 3.0. The total number of viewers of all videos was 2,189,434; useful videos were viewed by 1,050,445 viewers (48% of total viewers). The total viewership per day was 1,794.5 for useful videos and 1,132.0 for non-useful videos. The differences between the three assessors were insignificant (less than 0.5 for the mean and 0.3 for the SD). Conclusions Currently, YouTube provides an adequate resource for learning the nervous system examination, which can be used by medical students. However, there were deficiencies in videos covering examination of the cerebellum and balance system. Useful videos can be used as learning resources for medical students. PMID:23259768

  13. Moving mobile: using an open-sourced framework to enable a web-based health application on touch devices.

    PubMed

    Lindsay, Joseph; McLean, J Allen; Bains, Amrita; Ying, Tom; Kuo, M H

    2013-01-01

    Computer devices using touch-enabled technology are becoming more prevalent today. The application of a touch-screen high-definition surgical monitor could allow not only the display of high-definition video from an endoscopic camera, but also the display of, and interaction with, relevant patient and health-related data. However, this technology has not been quickly embraced by all health care organizations. Although traditional keyboard- or mouse-based software programs may function flawlessly on a touch-based device, many are not practical due to their small buttons and fonts and very complex menu systems. This paper describes an approach taken to overcome these problems. A real case study was used to demonstrate the novelty and efficiency of the proposed method.

  14. Ethical use of covert videoing techniques in detecting Munchausen syndrome by proxy.

    PubMed Central

    Foreman, D M; Farsides, C

    1993-01-01

    Munchausen syndrome by proxy is an especially malignant form of child abuse in which the carer (usually the mother) fabricates or exacerbates illness in the child to obtain medical attention. It can result in serious illness and even death of the child and it is difficult to detect. Some investigators have used video to monitor the carer's interaction with the child without obtaining consent--covert videoing. The technique presents several ethical problems, including exposure of the child to further abuse and a breach of trust between carer, child, and the professionals. Although covert videoing can be justified in restricted circumstances, new abuse procedures under the Children Act now seem to make its use unethical in most cases. Sufficient evidence should mostly be obtained from separation of the child and carer or videoing with consent to enable action to be taken to protect the child under an assessment order. If the new statutory instruments prove ineffective in Munchausen syndrome by proxy covert videoing may need to be re-evaluated. PMID:8401021

  15. Network Analysis of an Emergent Massively Collaborative Creation on Video Sharing Website

    NASA Astrophysics Data System (ADS)

    Hamasaki, Masahiro; Takeda, Hideaki; Nishimura, Takuichi

    Web technology enables numerous people to collaborate in creation. We designate this as massively collaborative creation via the Web. As an example of massively collaborative creation, we particularly examine video development on Nico Nico Douga, a video sharing website that is popular in Japan. We specifically examine videos on Hatsune Miku, a singing-synthesizer application that has inspired not only song creation but also songwriting, illustration, and video editing. Creators interact to create new content through their social network. In this paper, we analyzed the process of developing thousands of videos based on creators' social networks and investigated the relationships between creation activity and social networks. The social network reveals interesting features: creators form large and sparse social networks that include some centralized communities, and the members of such centralized communities share special tags. Different categories of creators play different roles in evolving the network; e.g., songwriters gather more links than other categories, implying that they are triggers of network evolution.

  16. Temporal flicker reduction and denoising in video using sparse directional transforms

    NASA Astrophysics Data System (ADS)

    Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.

    2008-08-01

    The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
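
    The adaptively chosen directional transforms of the paper are beyond a short sketch, but the sparsity-enforcement step they rely on can be illustrated with a fixed blockwise 2-D DCT and hard thresholding (block size and threshold below are arbitrary illustrative choices):

        import numpy as np
        from scipy.fft import dctn, idctn

        def denoise_block(block, threshold=20.0):
            """Denoise one block by keeping only significant 2-D DCT coefficients."""
            coeffs = dctn(block.astype(np.float64), norm="ortho")
            # Noise and flicker spread over many small coefficients;
            # a sparse signal survives the thresholding, they do not.
            coeffs[np.abs(coeffs) < threshold] = 0.0
            return idctn(coeffs, norm="ortho")

        def denoise_frame(frame, bs=8, threshold=20.0):
            """Apply blockwise DCT thresholding over a grayscale frame."""
            out = frame.astype(np.float64).copy()
            h, w = frame.shape
            for y in range(0, h - h % bs, bs):
                for x in range(0, w - w % bs, bs):
                    out[y:y+bs, x:x+bs] = denoise_block(frame[y:y+bs, x:x+bs], threshold)
            return out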

  17. Delivery of video-on-demand services using local storages within passive optical networks.

    PubMed

    Abeywickrama, Sandu; Wong, Elaine

    2013-01-28

    At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
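
    The abstract does not spell out the caching algorithm, so purely as a generic illustration of popularity-aware caching, a toy local storage that retains the most-requested titles might look like this:

        from collections import Counter

        class PopularityCache:
            """Toy popularity-aware video cache for a local storage node."""

            def __init__(self, capacity):
                self.capacity = capacity
                self.requests = Counter()  # long-run request counts per video id
                self.stored = set()        # video ids currently held locally

            def request(self, video_id):
                self.requests[video_id] += 1
                if video_id in self.stored:
                    return "local"                   # served from local storage
                if len(self.stored) < self.capacity:
                    self.stored.add(video_id)
                else:
                    coldest = min(self.stored, key=lambda v: self.requests[v])
                    if self.requests[video_id] > self.requests[coldest]:
                        self.stored.remove(coldest)  # displace least popular title
                        self.stored.add(video_id)
                return "central office"              # fetched upstream this time

    Over time, repeatedly requested titles displace cold ones, so most requests are answered locally, which is the source of the Quality-of-Service gain the paper measures.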

  18. Automatic view synthesis by image-domain-warping.

    PubMed

    Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa

    2013-09-01

    Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already partly established in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity to wear glasses is, however, often considered an obstacle, which hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best performing proposals.

  19. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05–0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.

  20. Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Peng, Xiaodong; Zhou, Wugen; Liu, Bo; Gerndt, Andreas

    2018-06-01

    In this paper, we propose a template-based 3D surface reconstruction system for non-rigid deformable objects from monocular video sequences. First, we generate a semi-dense template of the target object with a structure-from-motion method using a subsequence of the video. This video can be captured by a rigidly moving camera orienting toward the static target object, or by a static camera observing the rigidly moving target object. Then, with the reference template mesh as input and based on the framework of classical template-based methods, we solve an energy minimization problem to find the correspondence between the template and every frame, yielding a time-varying mesh that represents the deformation of the object. The energy terms combine a photometric cost, temporal and spatial smoothness costs, and an as-rigid-as-possible cost that enables elastic deformation. In this paper, an easy and controllable solution to generate the semi-dense template for complex objects is presented. Besides, we use an efficient iterative Schur-based linear solver for the energy minimization problem. The experimental evaluation presents qualitative reconstruction results for deforming objects on real sequences. Compared against results with other templates as input, the reconstructions based on our template are more accurate and detailed in certain regions. The experimental results also show that the linear solver we used is more efficient than a traditional conjugate-gradient-based solver.

  1. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    NASA Astrophysics Data System (ADS)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented, and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.

  2. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  3. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  4. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  5. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  6. 47 CFR 76.1514 - Bundling of video and local exchange services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Bundling of video and local exchange services... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1514 Bundling of video and local exchange services. An open video system operator may offer video and local exchange...

  7. What Do 2nd and 10th Graders Have in Common? Worms and Technology: Using Technology to Collaborate across Boundaries

    ERIC Educational Resources Information Center

    Culver, Patti; Culbert, Angie; McEntyre, Judy; Clifton, Patrick; Herring, Donna F.; Notar, Charles E.

    2009-01-01

    The article is about the collaboration between two classrooms that enabled a second grade class to participate in a high school biology class. Through the use of modern video conferencing equipment, Mrs. Culbert, with the help of the Dalton State College Educational Technology Training Center (ETTC), set up a live, two way video and audio feed of…

  8. Leveraging the Affordances of YouTube: The Role of Pedagogical Knowledge and Mental Models of Technology Functions for Lesson Planning with Technology

    ERIC Educational Resources Information Center

    Krauskopf, Karsten; Zahn, Carmen; Hesse, Friedrich W.

    2012-01-01

    Web-based digital video tools enable learners to access video sources in constructive ways. To leverage these affordances teachers need to integrate their knowledge of a technology with their professional knowledge about teaching. We suggest that this is a cognitive process, which is strongly connected to a teacher's mental model of the tool's…

  9. Efficient Feature Extraction and Likelihood Fusion for Vehicle Tracking in Low Frame Rate Airborne Video

    DTIC Science & Technology

    2010-07-01

    imagery, persistent sensor array ... New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a ... geometric occlusions between target and sensor, motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small ... distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with computational photography techniques enable the

  10. On Target: Organizing and Executing the Strategic Air Campaign Against Iraq

    DTIC Science & Technology

    2002-01-01

    possession, use, sale, creation or display of any pornographic photograph, videotape, movie, drawing, book, or magazine or similar representations. This ... forward-looking infrared (FLIR) sensor to create daylight-quality video images of terrain and utilized terrain-following radar to enable the aircraft to ... The Black Hole Planners had pleaded with CENTAF Intel to provide them with photos of targets, provide additional personnel to analyze PGM video

  11. A method of operation scheduling based on video transcoding for cluster equipment

    NASA Astrophysics Data System (ADS)

    Zhou, Haojie; Yan, Chun

    2018-04-01

    Cluster technology is widely applied in real-time video transcoding devices, which face massive growth in the number of video jobs and great diversity in resolutions and bit rates. After analyzing mainstream task scheduling algorithms and the characteristics of clusters built from real-time video transcoding equipment, a task delay scheduling algorithm matched to these characteristics is proposed. This algorithm enables the cluster to achieve better performance in generating and processing the job queue when it receives job instructions. Finally, a small real-time video transcoding cluster is constructed to analyze the computational capability, running time, resource occupation and other aspects of various scheduling algorithms. The experimental results show that, compared with traditional cluster task scheduling algorithms, the task delay scheduling algorithm is more flexible and efficient.
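
    The abstract leaves the algorithm itself unspecified, so the sketch below is only a generic illustration of delay scheduling (a job is deferred a bounded number of times while it waits for a preferred node before accepting any free one); all names and structures are assumptions:

        def delay_schedule(job_queue, free_nodes, skip_limit=3):
            """Toy delay scheduler: jobs briefly wait for their preferred node."""
            assignments = []
            skips = {}
            while job_queue:
                job = job_queue.pop(0)
                if job["preferred"] in free_nodes:
                    node = job["preferred"]
                elif skips.get(job["id"], 0) < skip_limit and free_nodes:
                    skips[job["id"]] = skips.get(job["id"], 0) + 1
                    job_queue.append(job)     # defer: hope a better node frees up
                    continue
                elif free_nodes:
                    node = free_nodes[0]      # waited long enough; take any node
                else:
                    job_queue.insert(0, job)  # no capacity at all; stop for now
                    break
                free_nodes.remove(node)
                assignments.append((job["id"], node))
            return assignments

        # Example: job 2 waits for "n9", then falls back to an available node.
        print(delay_schedule([{"id": 1, "preferred": "n1"},
                              {"id": 2, "preferred": "n9"}], ["n1", "n2"]))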

  12. Unmanned ground vehicles for integrated force protection

    NASA Astrophysics Data System (ADS)

    Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas

    2004-09-01

    The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operation Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters support autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection with details on a force-on-force scenario to test and demonstrate concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.

  13. Robust Targeting for the Smartphone Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Carter, Christopher

    2017-01-01

    The Smartphone Video Guidance Sensor (SVGS) is a miniature, self-contained autonomous rendezvous and docking sensor developed using a commercial off the shelf Android-based smartphone. It aims to provide a miniaturized solution for rendezvous and docking, enabling small satellites to conduct proximity operations and formation flying while minimizing interference with a primary payload. Previously, the sensor was limited by a slow (2 Hz) refresh rate and its use of retro-reflectors, both of which contributed to a limited operating environment. To advance the technology readiness level, a modified approach was developed, combining a multi-colored LED target with a focused target-detection algorithm. Alone, the use of an LED system was determined to be much more reliable, though slower, than the retro-reflector system. The focused target-detection system was developed in response to this problem to mitigate the speed reduction of using color. However, it also improved the reliability. In combination these two methods have been demonstrated to dramatically increase sensor speed and allow the sensor to select the target even with significant noise interfering with the sensor, providing millimeter level accuracy at a range of two meters with a 1U target.
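
    A rough sketch of the color-based detection idea follows; the HSV ranges, blob-size cutoff, and function layout are illustrative assumptions, and SVGS's actual focused target-detection algorithm and pose computation are not shown:

        import cv2
        import numpy as np

        # Approximate OpenCV HSV ranges for the colored LEDs; real thresholds
        # would be calibrated for the actual target, camera, and lighting.
        LED_RANGES = {
            "red":   ((0, 120, 120), (10, 255, 255)),
            "green": ((50, 120, 120), (70, 255, 255)),
            "blue":  ((110, 120, 120), (130, 255, 255)),
        }

        def find_led_centers(frame_bgr):
            """Locate candidate LED blob centers of each color in a camera frame."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            centers = {}
            for color, (lo, hi) in LED_RANGES.items():
                mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                blobs = [cv2.moments(c) for c in contours if cv2.contourArea(c) > 5]
                centers[color] = [(m["m10"] / m["m00"], m["m01"] / m["m00"])
                                  for m in blobs if m["m00"] > 0]
            return centers

    Restricting each search to one color band is what lets such a detector reject background clutter, the property the abstract credits for the increased reliability.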

  14. Robust Targeting for the Smartphone Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Carter, C.

    2017-01-01

    The Smartphone Video Guidance Sensor (SVGS) is a miniature, self-contained autonomous rendezvous and docking sensor developed using a commercial off the shelf Android-based smartphone. It aims to provide a miniaturized solution for rendezvous and docking, enabling small satellites to conduct proximity operations and formation flying while minimizing interference with a primary payload. Previously, the sensor was limited by a slow (2 Hz) refresh rate and its use of retro-reflectors, both of which contributed to a limited operating environment. To advance the technology readiness level, a modified approach was developed, combining a multi-colored LED target with a focused target-detection algorithm. Alone, the use of an LED system was determined to be much more reliable, though slower, than the retro-reflector system. The focused target-detection system was developed in response to this problem to mitigate the speed reduction of using color. However, it also improved the reliability. In combination these two methods have been demonstrated to dramatically increase sensor speed and allow the sensor to select the target even with significant noise interfering with the sensor, providing millimeter level precision at a range of two meters with a 1U target.

  15. Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls

    NASA Astrophysics Data System (ADS)

    Herrmann, Enrico; Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Successful user discrimination in a vehicle environment may reduce the number of switches, thus significantly reducing costs while increasing user convenience. The personalization of individual controls permits conditional passenger-enable/driver-disable options (and vice versa), which may improve safety. The authors propose a prototypic optical sensing system based on hand movement segmentation in near-infrared image sequences, implemented in an Audi A6 Avant. Analyzing the number of movements in special regions, the system recognizes the direction of the forearm and hand motion and decides whether the driver or the front-seat passenger is touching a control. The experimental evaluation is performed independently for uniformly and non-uniformly illuminated video data as well as for the complete video data set, which includes both subsets. The general test yields error rates of up to 14.41% FPR / 16.82% FNR for the driver and 17.61% FPR / 14.77% FNR for the passenger. Finally, the authors discuss the causes of the most frequently occurring errors as well as the prospects and limitations of optical sensing for user discrimination in passenger compartments.

  16. The universal serial bus endoscope: design and initial clinical experience.

    PubMed

    Hernandez-Zendejas, Gregorio; Dobke, Marek K; Guerrerosantos, Jose

    2004-01-01

    Endoscopic forehead lift is a well-established procedure in aesthetic plastic surgery. Many agree that currently available video-endoscopic equipment is bulky, multipieced and sometimes cumbersome in the operating theater. A novel system, the Universal Serial Bus Endoscope (USBE) was designed to simplify and reduce the number of necessary equipment pieces in the endoscopic setup. The USBE is attached by a single cable to a Universal Serial Bus (USB) port of a laptop computer. A built-in miniaturized cold light source provides illumination. A built-in digital camera chip enables procedure recording. The real-time images and movies obtained with USBE are displayed on the computer's screen and recorded on the laptop's hard disk drive. In this study, 25 patients underwent endoscopic browlift using the USBE system to test its clinical usefulness, all with good results and without complications or need for revision. The USBE was found to be reliable and easier to use than current video-endoscope equipment. The operative time needed to complete the procedure by the authors was reduced approximately 50%. The design and main technical characteristics of the USBE are presented.

  17. Free viewpoint TV and its international standardization

    NASA Astrophysics Data System (ADS)

    Tanimoto, Masayuki

    2009-05-01

    We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies, such as a 360-degree mirror-scan ray-capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started international standardization activities for FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.

  18. Concept of Video Bookmark (Videomark) and Its Application to the Collaborative Indexing of Lecture Video in Video-Based Distance Education

    ERIC Educational Resources Information Center

    Haga, Hirohide

    2004-01-01

    This article describes the development of the video bookmark, hereinafter referred to as the videomark, and its application to the collaborative indexing of the lecture video in video-based distance education system. The combination of the videomark system with the bulletin board system (BBS), which is another network tool used for discussion, is…

  19. Consumer-based technology for distribution of surgical videos for objective evaluation.

    PubMed

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills, and it has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques that enable surgeons to view them. The objective of this study is to determine the feasibility of utilizing widespread, current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, through a cable and hub, to a standard laptop computer's universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), converted to another format (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were utilized to convert the video into a format more appropriate for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and to video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons for grading via GOALS through various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.

  20. High speed imager test station

    DOEpatents

    Yates, George J.; Albright, Kevin L.; Turko, Bojan T.

    1995-01-01

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.

  1. High speed imager test station

    DOEpatents

    Yates, G.J.; Albright, K.L.; Turko, B.T.

    1995-11-14

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.

  2. A software-based tool for video motion tracking in the surgical skills assessment landscape.

    PubMed

    Ganni, Sandeep; Botden, Sanne M B I; Chmarra, Magdalena; Goossens, Richard H M; Jakimowicz, Jack J

    2018-01-16

    The use of motion tracking has been proved to provide objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the possibility of using a software-based solution to extract the data. Six expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15, and the following parameters were calculated: path length, average instrument movement, and number of sudden or extreme movements. The analysed data showed that experts had a significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
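
    Given a tip trajectory extracted from such video tracking, the reported metrics are straightforward to compute; in the sketch below the multiple-of-average definition of a "sudden" movement is an assumption for illustration, not the study's exact criterion:

        import numpy as np

        def motion_metrics(points, sudden_factor=3.0):
            """Path length, average per-frame movement, and sudden-movement count
            from a sequence of at least two (x, y) instrument-tip positions."""
            pts = np.asarray(points, dtype=float)
            steps = np.hypot(*np.diff(pts, axis=0).T)  # per-frame displacement
            path_length = float(steps.sum())
            avg_move = float(steps.mean())
            sudden = int(np.sum(steps > sudden_factor * avg_move))
            return path_length, avg_move, sudden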

  3. Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences.

    PubMed

    Lungaro, Pietro; Sjoberg, Rickard; Valero, Alfredo Jose Fanghella; Mittal, Ashutosh; Tollmar, Konrad

    2018-04-01

    This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The prerequisites to achieve these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders, and while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationships between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.
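
    As a crude illustration of gaze-dependent quality assignment (not the paper's HEVC-specific mechanism), a per-tile quantization parameter could simply increase with distance from the fixation point:

        import math

        def tile_qp(tile_cx, tile_cy, gaze_x, gaze_y,
                    qp_min=22, qp_max=42, radius=200.0):
            """Low QP (high quality) near the gaze point, high QP in the periphery."""
            d = math.hypot(tile_cx - gaze_x, tile_cy - gaze_y)
            t = min(d / radius, 1.0)  # 0 at fixation, 1 beyond the foveal radius
            return round(qp_min + t * (qp_max - qp_min))

    In a real system the falloff profile, tile size, and update latency all interact with the eye-tracker delay, which is why the paper emphasizes near-instantaneous quality adaptation.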

  4. Perpetual Ocean - Gulf Stream

    NASA Image and Video Library

    2017-12-08

    This image shows ocean surface currents around the world during the period from June 2005 through December 2007. Go here to view a video of this data: www.flickr.com/photos/gsfc/7009056027/ NASA/Goddard Space Flight Center Scientific Visualization Studio NASA image use policy. NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.

  5. Video-Based Big Data Analytics in Cyberlearning

    ERIC Educational Resources Information Center

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  6. Exploring Deep Space - Uncovering the Anatomy of Periventricular Structures to Reveal the Lateral Ventricles of the Human Brain.

    PubMed

    Colibaba, Alexandru S; Calma, Aicee Dawn B; Webb, Alexandra L; Valter, Krisztina

    2017-10-22

    Anatomy students are typically provided with two-dimensional (2D) sections and images when studying cerebral ventricular anatomy and students find this challenging. Because the ventricles are negative spaces located deep within the brain, the only way to understand their anatomy is by appreciating their boundaries formed by related structures. Looking at a 2D representation of these spaces, in any of the cardinal planes, will not enable visualisation of all of the structures that form the boundaries of the ventricles. Thus, using 2D sections alone requires students to compute their own mental image of the 3D ventricular spaces. The aim of this study was to develop a reproducible method for dissecting the human brain to create an educational resource to enhance student understanding of the intricate relationships between the ventricles and periventricular structures. To achieve this, we created a video resource that features a step-by-step guide using a fiber dissection method to reveal the lateral and third ventricles together with the closely related limbic system and basal ganglia structures. One of the advantages of this method is that it enables delineation of the white matter tracts that are difficult to distinguish using other dissection techniques. This video is accompanied by a written protocol that provides a systematic description of the process to aid in the reproduction of the brain dissection. This package offers a valuable anatomy teaching resource for educators and students alike. By following these instructions educators can create teaching resources and students can be guided to produce their own brain dissection as a hands-on practical activity. We recommend that this video guide be incorporated into neuroanatomy teaching to enhance student understanding of the morphology and clinical relevance of the ventricles.

  7. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  8. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  9. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  10. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  11. 47 CFR 76.1507 - Competitive access to satellite cable programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1507 Competitive....1000 through 76.1003 shall also apply to an operator of an open video system and its affiliate which provides video programming on its open video system, except as limited by paragraph (a) (1)-(3) of this...

  12. Toward enhancing the distributed video coder under a multiview video codec framework

    NASA Astrophysics Data System (ADS)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.
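
    The comparison above is stated in terms of PSNR, the standard fidelity metric for decoded video. As a reference point, here is the textbook definition in Python over flattened 8-bit frames; the metric itself is standard, while the toy pixel data is invented for illustration.

        import math

        def psnr(reference, decoded, max_value=255):
            """Peak signal-to-noise ratio (dB) between two equal-length
            sequences of pixel values: 10 * log10(MAX^2 / MSE)."""
            if len(reference) != len(decoded):
                raise ValueError("frames must have the same number of pixels")
            mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
            if mse == 0:
                return float("inf")  # identical frames
            return 10 * math.log10(max_value ** 2 / mse)

        print(round(psnr([100, 120, 140], [101, 118, 143]), 2))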

  13. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  14. 47 CFR 76.1513 - Open video dispute resolution.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Open video dispute resolution. 76.1513 Section... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1513 Open video dispute resolution. (a... with the following additions or changes. (b) Alternate dispute resolution. An open video system...

  15. 47 CFR 76.1513 - Open video dispute resolution.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Open video dispute resolution. 76.1513 Section... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1513 Open video dispute resolution. (a... with the following additions or changes. (b) Alternate dispute resolution. An open video system...

  16. 47 CFR 76.1513 - Open video dispute resolution.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Open video dispute resolution. 76.1513 Section... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1513 Open video dispute resolution. (a... with the following additions or changes. (b) Alternate dispute resolution. An open video system...

  17. 47 CFR 76.1513 - Open video dispute resolution.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Open video dispute resolution. 76.1513 Section... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1513 Open video dispute resolution. (a... with the following additions or changes. (b) Alternate dispute resolution. An open video system...

  18. 47 CFR 76.1513 - Open video dispute resolution.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Open video dispute resolution. 76.1513 Section... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1513 Open video dispute resolution. (a... with the following additions or changes. (b) Alternate dispute resolution. An open video system...

  19. Our experiences with development of digitised video streams and their use in animal-free medical education.

    PubMed

    Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan

    2004-06-01

    Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative models employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image capturing devices and software enable the use of digitised video streams, tailored precisely to the specific situation. Here, we demonstrate that with reasonable financial resources, it is possible to prepare video sequences and to introduce them into the PowerPoint presentation, thereby shaping the teaching process according to individual students' needs and specificities.

  20. Video Game Addiction and Life Style Changes: Implications for Caregivers Burden.

    PubMed

    Sharma, Manoj Kumar

    2016-01-01

    There is limited information available on the caregiver's perspective on managing a user's excessive use of technology. The present case series explores the caregiver burden related to users' addictive use of video games. The users and caregivers approached the Service for Healthy Use of Technology (SHUT clinic) for management. They were assessed using the Griffith criteria for video game addiction, the General Health Questionnaire, and the Family Burden Interview Schedule. The cases demonstrate the addictive use of video games, its impact on the users' lifestyle, and the presence of psychiatric distress and family burden in the caregivers. Caregivers also reported disturbance in psychosocial domains and helplessness in managing the excessive use. These findings have implications for building support groups and services to address parents' distress and enable them to handle the dysfunction in users.

  1. SpotMetrics: An Open-Source Image-Analysis Software Plugin for Automatic Chromatophore Detection and Measurement.

    PubMed

    Hadjisolomou, Stavros P; El-Haddad, George

    2017-01-01

    Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high-framerate recording, which can be used to record chromatophore activity in greater detail and accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high-resolution, high-framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes over time, such as eye pupil size or the number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will benefit other colleagues both in the cephalopod biology field and in other disciplines.

  2. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: percentage perforation = P/T × 100 per cent, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
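
    Because the equation above is fully specified, a small Python helper suffices to reproduce the calculation; the pixel counts in the example call are invented for illustration.

        def perforation_percentage(perforation_pixels, membrane_pixels):
            """Percentage perforation = P/T x 100, where P is the perforation
            area and T is the total tympanic membrane area (both in pixels^2,
            with T including the perforation)."""
            if membrane_pixels <= 0 or not (0 <= perforation_pixels <= membrane_pixels):
                raise ValueError("areas must satisfy 0 <= P <= T and T > 0")
            return perforation_pixels / membrane_pixels * 100

        # Example: a 12,500 px^2 perforation on a 98,000 px^2 membrane.
        print(f"{perforation_percentage(12_500, 98_000):.1f}% perforation")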

  3. A solid state video recorder as a direct replacement of a mechanically driven disc recording device in a security system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, P.L.

    1989-01-01

    Whether upgrading or developing a security system, investing in a solid state video recorder may prove quite prudent. Even though the initial cost of a solid state recorder may be higher than that of a disc recorder, it is practically maintenance free; its cost effectiveness over an extended period of time more than justifies the initial expense. This document illustrates the use of a solid state video recorder as a direct replacement for a mechanically driven disc recorder that existed in a synchronized video recording system. The original system was called the Universal Video Disc Recorder System. The modified system will now be referred to as the Solid State Video Recording System. 5 figs.

  4. Tactical visualization module

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1999-07-01

    The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next- generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.

  5. Laser Range and Bearing Finder for Autonomous Missions

    NASA Technical Reports Server (NTRS)

    Granade, Stephen R.

    2004-01-01

    NASA has recently re-confirmed its interest in autonomous systems as an enabling technology for future missions. In order for autonomous missions to be possible, highly capable relative sensor systems are needed to determine an object's distance, direction, and orientation. This is true whether the mission is autonomous in-space assembly, rendezvous and docking, or rover surface navigation. Advanced Optical Systems, Inc. has developed a wide-angle laser range and bearing finder (RBF) for autonomous space missions. The laser RBF has a number of features that make it well suited for autonomous missions. It has an operating range of 10 m to 5 km, with a 5 deg field of view. Its wide field of view removes the need for scanning systems such as gimbals, eliminating moving parts, making the sensor simpler, and easing space qualification. Its range accuracy is 1% or better. It is designed to operate either as a stand-alone sensor or in tandem with a sensor that returns range, bearing, and orientation at close ranges, such as NASA's Advanced Video Guidance Sensor. We have assembled the initial prototype and are currently testing it. We will discuss the laser RBF's design and specifications. Keywords: laser range and bearing finder, autonomous rendezvous and docking, space sensors, on-orbit sensors, advanced video guidance sensor

  6. Provision of QoS for Multimedia Services in IEEE 802.11 Wireless Network

    DTIC Science & Technology

    2006-10-01

    Provision of QoS for Multimedia Services in IEEE 802.11 Wireless Network. In Dynamic Communications Management (pp. 10-1 – 10-16). Meeting Proceedings...mechanisms have been used for managing a limited bandwidth link within the IPv6 military narrowband network. The detailed description of these...confirms that the implemented video rate adaptation mechanism enables improvement of the quality of video transfer.

  7. Embedded object concept: case balancing two-wheeled robot

    NASA Astrophysics Data System (ADS)

    Vallius, Tero; Röning, Juha

    2007-09-01

    This paper presents the Embedded Object Concept (EOC) and a telepresence robot system which is a test case for the EOC. The EOC utilizes common object-oriented methods used in software by applying them to combined Lego-like software-hardware entities. These entities represent objects in object-oriented design methods, and they are the building blocks of embedded systems. The goal of the EOC is to make the designing of embedded systems faster and easier. This concept enables people without comprehensive knowledge in electronics design to create new embedded systems, and for experts it shortens the design time of new embedded systems. We present the current status of a telepresence robot created with Atomi-objects, which is the name for our implementation of the embedded objects. The telepresence robot is a relatively complex test case for the EOC. The robot has been constructed using incremental device development, which is made possible by the architecture of the EOC. The robot contains video and audio exchange capability and a controlling system for driving with two wheels. The robot consists of Atomi-objects, demonstrating the suitability of the EOC for prototyping and easy modifications, and proving the capabilities of the EOC by realizing a function that normally requires a computer. The computer counterpart is a regular PC with audio and video capabilities running with a robot control application. The robot is functional and successfully tested.

  8. Collaborative video caching scheme over OFDM-based long-reach passive optical networks

    NASA Astrophysics Data System (ADS)

    Li, Yan; Dai, Shifang; Chang, Xiangmao

    2018-07-01

    Long-reach passive optical networks (LR-PONs) are now considered a desirable access solution for cost-efficiently delivering broadband services by integrating the metro network with the access network; among these, orthogonal frequency division multiplexing (OFDM)-based LR-PONs attract greater research interest due to their good robustness and high spectrum efficiency. In such LR-PONs, however, it is still challenging to effectively provide video service, one of the most popular and profitable broadband services, to end users. Given that many video requesters (i.e., end users) served in OFDM-based LR-PONs are far from the optical line terminal (OLT), the traditional video delivery model, which relies on the OLT to transmit videos to requesters, is inefficient: it incurs not only larger video playback delay but also higher downstream bandwidth consumption. In this paper, we propose a novel video caching scheme that collaboratively caches videos on distributed optical network units (ONUs), which are closer to end users, and thus provides videos to requesters in a timely and cost-efficient manner over OFDM-based LR-PONs. We first construct an OFDM-based LR-PON architecture that enables cooperation among ONUs while caching videos. Given the limited storage capacity of each ONU, we then propose collaborative approaches to cache videos on ONUs with the aim of maximizing the local video hit ratio (LVHR), i.e., the proportion of video requests that can be directly satisfied by ONUs, under diverse resource requirements and request distributions of videos. Simulations are finally conducted to evaluate the efficiency of the proposed scheme.
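
    The objective stated above, maximizing the hit ratio under a storage cap, is a knapsack-style placement problem. The Python sketch below shows only the simplest single-ONU greedy heuristic (rank videos by expected requests per megabyte); the paper's collaborative multi-ONU approaches are more elaborate, and the video catalogue in the example is invented.

        def greedy_cache(videos, capacity_mb):
            """Greedy heuristic: cache the videos with the highest
            requests-per-megabyte first until storage is exhausted.
            Returns the cached titles and the local video hit ratio (LVHR).
            `videos` is a list of (title, size_mb, expected_requests)."""
            total_requests = sum(req for _, _, req in videos)
            cached, hits, used = [], 0, 0
            for title, size, req in sorted(videos, key=lambda v: v[2] / v[1], reverse=True):
                if used + size <= capacity_mb:
                    cached.append(title)
                    used += size
                    hits += req
            return cached, hits / total_requests

        catalogue = [("newsclip", 200, 900), ("movie", 4000, 1500), ("series_ep1", 800, 700)]
        cached, lvhr = greedy_cache(catalogue, capacity_mb=1500)
        print(cached, f"LVHR = {lvhr:.2f}")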

  9. Wind Tunnel Test of NASA’s Most Powerful Rocket (360° Animation)

    NASA Image and Video Library

    2018-01-08

    What are wind tunnels? And how do they help researchers design and test next-generation aircraft and spacecraft? This interactive 360° animation takes you inside the Unitary Plan Wind Tunnel at NASA’s Ames Research Center in Silicon Valley. The facility is one of seven wind tunnels located at Ames for exploring the complex physics of flight. The video features a four percent scale model of NASA’s most powerful rocket, the Space Launch System, or SLS. Two SLS models--one silver and one pink--appear in the video. The latter is coated with a special paint to track surface pressure readings during testing. Once built, the SLS rocket will be capable of sending astronauts on bold new missions into deep space.

    How to watch 360° content in VR:

    YouTube and Google Cardboard: 1. Open YouTube on your mobile device and select the video. 2. Click the Google Cardboard icon on the bottom right. 3. Insert the mobile device into the Google Cardboard device. 4. Watch through the headset.

    Samsung Gear VR: 1. Download the 360 mp4 video file. 2. Create a folder in the root directory of your device or SD Card called “MilkVR”. 3. Put the video file in that folder. 4. Open the Samsung VR application from the Oculus App. 5. Insert the phone into the Gear VR. 6. Put on the VR headset. 7. Navigate to the section called “Sideloaded”. 8. Select the video from “Storage 1”. 9. The optimal viewing format is 360 x 360; change the format by selecting the format icon on the bottom right.

    PlayStation VR: 1. Download the 360 mp4 video file from NASA.gov. 2. Create a folder on a USB drive, formatted in FAT32 or exFAT. 3. Copy the video file into that folder. 4. Insert the USB drive into the PlayStation 4. 5. Connect the PlayStation VR headset to the PlayStation 4 and turn on the power. 6. Put on the VR headset. 7. Open the PlayStation Media Player (updated to v2.50 or higher). 8. Be sure the Media Player is set to “VR Mode” by holding the “Option” button. 9. Open the video file and watch the video.

  10. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka MicroElectroMechanical Systems - MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side is located at Texas Tech University and hosts a server machine that runs the Linux operating system and is used for interfacing the MEMS lab with the outside world via the internet. Controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabVIEW Virtual Instruments. An optical microscope (100×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.

  11. The Eye Catching Property of Digital-Signage with Scent and a Scent-Emitting Video Display System

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Otake, Syunya

    In this paper, an effective method of inducing glances at digital signage by emitting a scent is described. The experiment was done as a simulation using an immersive VR system, because an actual passageway would have imposed many restrictions. In order to investigate the eye-catching property of digital signage, passers-by's eye movements were analyzed. The experiment clarified that digital signage with scent attracted attention and left a strong impression in memory. Next, a scent-emitting video display system for application to digital signage is described. To this end, a scent-emitting device must be developed that can quickly change the scents being released and present them from a distance (by a non-contact method), thus maintaining the relationship between scent and image. We propose a new method in which a device that releases pressurized gases is placed behind a display screen filled with tiny pores. Scents ejected from this device travel through the pores to the front side of the screen. An excellent scent delivery characteristic was obtained because the distance to the user is short and the scent is presented from the front. We also present a method for inducing viewer reactions using on-screen images, thereby enabling scent release to coincide precisely with viewer inhalations. We anticipate that the simultaneous presentation of scents and video images will deepen viewers' comprehension of these images.

  12. Augmenting reality in Direct View Optical (DVO) overlay applications

    NASA Astrophysics Data System (ADS)

    Hogan, Tim; Edwards, Tim

    2014-06-01

    The integration of overlay displays into rifle scopes can transform precision Direct View Optical (DVO) sights into intelligent interactive fire-control systems. Overlay displays can provide ballistic solutions within the sight for dramatically improved targeting, can fuse sensor video to extend targeting into nighttime or dirty battlefield conditions, and can overlay complex situational awareness information over the real-world scene. High brightness overlay solutions for dismounted soldier applications have previously been hindered by excessive power consumption, weight and bulk making them unsuitable for man-portable, battery powered applications. This paper describes the advancements and capabilities of a high brightness, ultra-low power text and graphics overlay display module developed specifically for integration into DVO weapon sight applications. Central to the overlay display module was the development of a new general purpose low power graphics controller and dual-path display driver electronics. The graphics controller interface is a simple 2-wire RS-232 serial interface compatible with existing weapon systems such as the IBEAM ballistic computer and the RULR and STORM laser rangefinders (LRF). The module features include multiple graphics layers, user configurable fonts and icons, and parameterized vector rendering, making it suitable for general purpose DVO overlay applications. The module is configured for graphics-only operation for daytime use and overlays graphics with video for nighttime applications. The miniature footprint and ultra-low power consumption of the module enables a new generation of intelligent DVO systems and has been implemented for resolutions from VGA to SXGA, in monochrome and color, and in graphics applications with and without sensor video.

  13. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    DTIC Science & Technology

    2017-04-19

    ...research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance cameras as video sources. The architectural considerations of this system are presented. Issues to be reckoned with in implementing a scalable...

  14. Advanced Video Data-Acquisition System For Flight Research

    NASA Technical Reports Server (NTRS)

    Miller, Geoffrey; Richwine, David M.; Hass, Neal E.

    1996-01-01

    Advanced video data-acquisition system (AVDAS) developed to satisfy a variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like the NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.

  15. Automated Visual Event Detection, Tracking, and Data Management System for Cabled- Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

    Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It's unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under analyzed due to lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and The Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic-selective attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by the MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROV), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute and data intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.

  16. Design and Fabrication of Nereid-UI: A Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice

    NASA Astrophysics Data System (ADS)

    Whitcomb, L. L.; Bowen, A. D.; Yoerger, D.; German, C. R.; Kinsey, J. C.; Mayer, L. A.; Jakuba, M. V.; Gomez-Ibanez, D.; Taylor, C. L.; Machado, C.; Howland, J. C.; Kaiser, C. L.; Heintz, M.; Pontbriand, C.; Suman, S.; O'hara, L.

    2013-12-01

    The Woods Hole Oceanographic Institution and collaborators from the Johns Hopkins University and the University of New Hampshire are developing for the Polar Science Community a remotely-controlled underwater robotic vehicle capable of being tele-operated under ice under remote real-time human supervision. The Nereid Under-Ice (Nereid-UI) vehicle will enable exploration and detailed examination of biological and physical environments at glacial ice-tongues and ice-shelf margins, delivering high-definition video in addition to survey data from on board acoustic, chemical, and biological sensors. Preliminary propulsion system testing indicates the vehicle will be able to attain standoff distances of up to 20 km from an ice-edge boundary, as dictated by the current maximum tether length. The goal of the Nereid-UI system is to provide scientific access to under-ice and ice-margin environments that is presently impractical or infeasible. FIBER-OPTIC TETHER: The heart of the Nereid-UI system is its expendable fiber optic telemetry system. The telemetry system utilizes many of the same components pioneered for the full-ocean depth capable HROV Nereus vehicle, with the addition of continuous fiber status monitoring, and new float-pack and depressor designs that enable single-body deployment. POWER SYSTEM: Nereid-UI is powered by a pressure-tolerant lithium-ion battery system composed of 30 Ah prismatic pouch cells, arranged on a 90 volt bus and capable of delivering 15 kW. The cells are contained in modules of 8 cells, and groups of 9 modules are housed together in oil-filled plastic boxes. The power distribution system uses pressure tolerant components extensively, each of which have been individually qualified to 10 kpsi and operation between -20 C and 40 C. THRUSTERS: Nereid-UI will employ eight identical WHOI-designed thrusters, each with a frameless motor, oil-filled and individually compensated, and designed for low-speed (500 rpm max) direct drive. We expect an end-to-end propulsive efficiency of between 0.3 and 0.4 at a transit speed of 1 m/s based on testing conducted at WHOI. CAMERAS: Video imagery is one of the principal products of Nereid-UI. Two fiber-optic telemetry wavelengths deliver 1.5 Gb/s uncompressed HDSDI video to the support vessel in real time, supporting a Kongsberg OE14-522 hyperspherical pan and tilt HD camera and several utility cameras. PROJECT STATUS: The first shallow-water vehicle trials are scheduled for September 2013. The trials are designed to test core vehicle systems particularly the power system, main computer and control system, thrusters, video and telemetry system, and to refine camera, lighting and acoustic sensor placement for piloted and closed-loop control, especially as pertains to working near the underside of ice. Remaining vehicle design tasks include finalizing the single-body deployment concept and depressor, populating the scientific sensing suite, and the software development necessary to implement the planned autonomous return strategy. Final design and fabrication for these remaining components of the vehicle system will proceed through fall 2013, with trials under lake ice in early 2014, and potential polar trials beginning in 2014-15. SUPPORT: NSF OPP (ANT-1126311), WHOI, James Family Foundation, and George Frederick Jewett Foundation East.

  17. Effects of video-game ownership on young boys' academic and behavioral functioning: a randomized, controlled study.

    PubMed

    Weis, Robert; Cerankosky, Brittany C

    2010-04-01

    Young boys who did not own video games were promised a video-game system and child-appropriate games in exchange for participating in an "ongoing study of child development." After baseline assessment of boys' academic achievement and parent- and teacher-reported behavior, boys were randomly assigned to receive the video-game system immediately or to receive the video-game system after follow-up assessment, 4 months later. Boys who received the system immediately spent more time playing video games and less time engaged in after-school academic activities than comparison children. Boys who received the system immediately also had lower reading and writing scores and greater teacher-reported academic problems at follow-up than comparison children. Amount of video-game play mediated the relationship between video-game ownership and academic outcomes. Results provide experimental evidence that video games may displace after-school activities that have educational value and may interfere with the development of reading and writing skills in some children.

  18. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  19. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  20. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  1. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  2. 47 CFR 76.1511 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1511 Fees. An open video system operator may be subject to the... open video system operator or its affiliates, including all revenues received from subscribers and all...

  3. The clinical information system GastroBase: integration of image processing and laboratory communication.

    PubMed

    Kocna, P

    1995-01-01

    GastroBase, a clinical information system, incorporates patient identification, medical records, images, laboratory data, patient history, physical examination, and other patient-related information. Program modules are written in C; all data are processed using the Novell Btrieve data manager. The patient identification database represents the core of this information system. A graphics library developed in the past year, together with graphics modules and a special video card, enables the storing, archiving, and linking of different images to the electronic patient medical record. GastroBase has been running in daily routine for more than four years, and the database contains more than 25,000 medical records and 1,500 images. This new version of GastroBase is now incorporated into the clinical information system of the University Clinic in Prague.

  4. Video-CRM: understanding customer behaviors in stores

    NASA Astrophysics Data System (ADS)

    Haritaoglu, Ismail; Flickner, Myron; Beymer, David

    2013-03-01

    This paper describes two real-time computer vision systems created 10 years ago that detect and track people in stores to obtain insights into customer behavior while shopping. The first system uses a single color camera to identify shopping groups in the checkout line. Shopping groups are identified by analyzing the inter-body distances coupled with the cashier's activities to detect checkout transaction start and end times. The second system uses multiple overhead narrow-baseline stereo cameras to detect and track people, their body posture and parts to understand customer interactions with products, such as a customer picking a product from a shelf. In pilot studies, both systems demonstrated real-time performance and sufficient accuracy to enable a more detailed understanding of customer behavior and to extract actionable real-time retail analytics.
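
    Grouping by inter-body distance, as the first system does, amounts to single-linkage clustering of tracked positions. The Python sketch below illustrates that step alone with a union-find structure; the coordinates and distance threshold are invented, and the paper's method additionally incorporates cashier activity, which is omitted here.

        def shopping_groups(positions, threshold=1.0):
            """Cluster tracked people into groups: two people share a group
            if they are (transitively) within `threshold` metres of each
            other. `positions` is a list of (x, y) floor coordinates."""
            parent = list(range(len(positions)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i

            for i, (xi, yi) in enumerate(positions):
                for j, (xj, yj) in enumerate(positions[:i]):
                    if ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= threshold:
                        parent[find(i)] = find(j)  # merge the two groups

            groups = {}
            for i in range(len(positions)):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())

        print(shopping_groups([(0, 0), (0.6, 0), (5, 5), (5.4, 5.2)]))  # [[0, 1], [2, 3]]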

  5. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.
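
    The abstract describes the subtraction algorithm only at a high level. One common way such schemes are realized, sketched below in Python as an assumption rather than as the paper's documented method, is to difference frames captured with the target illuminated and unilluminated, so that only the strongly retroreflective target survives a threshold.

        def isolate_target(frame_lit, frame_unlit, threshold=40):
            """Subtract an unlit frame from a lit one so that only the
            retroreflective target (which brightens strongly under the
            camera's illuminator) survives thresholding. Frames are 2-D
            lists of 8-bit grey levels; returns a binary mask."""
            return [[1 if (a - b) > threshold else 0
                     for a, b in zip(row_lit, row_unlit)]
                    for row_lit, row_unlit in zip(frame_lit, frame_unlit)]

        lit   = [[30, 220, 35], [28, 210, 33]]
        unlit = [[29,  40, 34], [27,  38, 32]]
        print(isolate_target(lit, unlit))  # target pixels -> 1, background -> 0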

  6. 76 FR 23624 - In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-770] In the Matter of Certain Video Game Systems... importation of certain video game systems and wireless controllers and components thereof by reason of... sale within the United States after importation of certain video game systems and wireless controllers...

  7. 47 CFR 76.1509 - Syndicated program exclusivity.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1509 Syndicated program exclusivity. (a) Sections 76.151 through 76.163 shall apply to open video systems in accordance with the provisions... to an open video system. (c) Any provision of § 76.155 that refers to a “cable system operator” or...

  8. 47 CFR 76.1509 - Syndicated program exclusivity.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1509 Syndicated program exclusivity. (a) Sections 76.151 through 76.163 shall apply to open video systems in accordance with the provisions... to an open video system. (c) Any provision of § 76.155 that refers to a “cable system operator” or...

  9. 47 CFR 76.1509 - Syndicated program exclusivity.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1509 Syndicated program exclusivity. (a) Sections 76.151 through 76.163 shall apply to open video systems in accordance with the provisions... to an open video system. (c) Any provision of § 76.155 that refers to a “cable system operator” or...

  10. 47 CFR 76.1509 - Syndicated program exclusivity.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1509 Syndicated program exclusivity. (a) Sections 76.151 through 76.163 shall apply to open video systems in accordance with the provisions... to an open video system. (c) Any provision of § 76.155 that refers to a “cable system operator” or...

  11. 47 CFR 76.1509 - Syndicated program exclusivity.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1509 Syndicated program exclusivity. (a) Sections 76.151 through 76.163 shall apply to open video systems in accordance with the provisions... to an open video system. (c) Any provision of § 76.155 that refers to a “cable system operator” or...

  12. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. © 2016. Published by The Company of Biologists Ltd.
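
    The pipeline described above (score each frame with a descriptor, threshold the scores to propose candidate events, then have an expert vet only the candidates) can be illustrated with a deliberately crude descriptor. The Python sketch below uses mean absolute inter-frame difference, which is not one of the paper's four descriptors, purely to show the detect-then-filter structure; the threshold and toy frames are invented.

        def candidate_events(frames, threshold=12.0, min_gap=5):
            """Flag frame indices whose mean absolute difference from the
            previous frame (a crude motion-energy descriptor) exceeds
            `threshold`, suppressing detections closer than `min_gap`
            frames. Frames are flat lists of grey levels."""
            events, last = [], -min_gap
            for t in range(1, len(frames)):
                energy = sum(abs(a - b) for a, b in zip(frames[t], frames[t - 1])) / len(frames[t])
                if energy > threshold and t - last >= min_gap:
                    events.append(t)
                    last = t
            return events

        still, strike = [10] * 64, [10] * 32 + [200] * 32
        print(candidate_events([still, still, strike, still, still]))  # [2]

    An expert would then review only the flagged frames, which is how the authors reduce weeks of manual labeling to hours.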

  13. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe the visual content of videos by content-based analysis. (3) A query profiling database which records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of user. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

  14. Detection Thresholds for Rotation and Translation Gains in 360° Video-Based Telepresence Systems.

    PubMed

    Zhang, Jingxin; Langbehn, Eike; Krupke, Dennis; Katzakis, Nicholas; Steinicke, Frank

    2018-04-01

    Telepresence systems have the potential to overcome limits and distance constraints of the real-world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today's telepresence systems are often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore a RE in an immersive, intuitive and natural way, e.g., by real walking in the user's local environment (LE), and thus controlling motions of the robot platform in the RE. However, the LE in which the user's motions are tracked usually provides a much smaller interaction space than the RE. In this context, redirected walking (RDW) is a very suitable approach to solve this problem. However, so far there is no previous work, which explored if and how RDW can be used in video-based 360° telepresence systems. In this article, we conducted two psychophysical experiments in which we have quantified how much humans can be unknowingly redirected on virtual paths in the RE, which are different from the physical paths that they actually walk in the LE. Experiment 1 introduces a discrimination task between local and remote translations, and in Experiment 2 we analyzed the discrimination between local and remote rotations. In Experiment 1 participants performed straightforward translations in the LE that were mapped to straightforward translations in the RE shown as 360° videos, which were manipulated by different gains. Then, participants had to estimate if the remotely perceived translation was faster or slower than the actual physically performed translation. Similarly, in Experiment 2 participants performed rotations in the LE that were mapped to the virtual rotations in a 360° video-based RE to which we applied different gains. Again, participants had to estimate whether the remotely perceived rotation was smaller or larger than the actual physically performed rotation. Our results show that participants are not able to reliably discriminate the difference between physical motion in the LE and the virtual motion from the 360° video RE when virtual translations are down-scaled by 5.8% and up-scaled by 9.7%, and virtual rotations are about 12.3% less or 9.2% more than the corresponding physical rotations in the LE.
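
    In both experiments the manipulation is a multiplicative gain between physical and virtual motion. A minimal Python sketch of that mapping follows, together with a check against the detection-threshold intervals reported above (translation gains of roughly 0.942 to 1.097, rotation gains of roughly 0.877 to 1.092); treating the thresholds as hard interval bounds is a simplification of the underlying psychometric results.

        def remote_motion(physical, gain):
            """Map a physical translation (metres) or rotation (degrees)
            in the local environment to the virtual motion shown in the
            360-degree video of the remote environment."""
            return gain * physical

        def likely_undetected(gain, kind):
            """Check a gain against the detection-threshold intervals from
            the study: translations tolerate -5.8% to +9.7%, rotations
            -12.3% to +9.2%, before users reliably notice."""
            low, high = (0.942, 1.097) if kind == "translation" else (0.877, 1.092)
            return low <= gain <= high

        print(remote_motion(2.0, 1.08), likely_undetected(1.08, "translation"))  # 2.16 True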

  15. Introducing AgilePQ DCM (Digital Conversion Module) - Video Text Version

    Science.gov Websites

    monolith. There's not one thing you do to make things digitally secure. We talk in terms of confidentiality... enable these nonfunctional attributes. There's not, as I said, one thing you can do to satisfy the whole subject of cyber security, and there's not one thing you can do to fully enable these nonfunctional attributes.

  16. Indexing method of digital audiovisual medical resources with semantic Web integration.

    PubMed

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode.
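
    To make the proposal concrete, the sketch below shows what a Dublin Core-style entry with MeSH/UMLS concept indexing might look like in Python; the record contents, URL, and concept identifier are hypothetical, and a real deployment would store such records in a database behind the video platform rather than in a dict.

        # Hypothetical Dublin Core-style record for a medical teaching video.
        record = {
            "dc:title":      "Laparoscopic cholecystectomy, step 3: clip placement",
            "dc:creator":    "University Hospital, Dept. of Surgery",
            "dc:date":       "2004-11-05",
            "dc:format":     "video/mpeg",
            "dc:identifier": "rtsp://video.example.org/surgery/chole-step3",
            "dc:type":       "MovingImage",
            # Conceptual indexing: (MeSH heading, placeholder UMLS concept ID)
            "dc:subject":    [("Cholecystectomy, Laparoscopic", "C0000000")],
        }

        def matches(record, concept_id):
            """Retrieve by UMLS concept rather than by free-text search,
            which is what enables conceptual navigation across records."""
            return any(cui == concept_id for _, cui in record["dc:subject"])

        print(matches(record, "C0000000"))  # True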

  17. Multilocation Video Conference By Optical Fiber

    NASA Astrophysics Data System (ADS)

    Gray, Donald J.

    1982-10-01

    An experimental system that permits interconnection of many offices in a single video conference is described. Video images transmitted to conference participants are selected by the conference chairman and switched by a microprocessor-controlled video switch. Speakers can, at their choice, transmit their own images or images of graphics they wish to display. Users are connected to the Switching Center by optical fiber subscriber loops that carry analog video, digitized telephone, data and signaling. The same system also provides user-selectable distribution of video program and video library material. Experience in the operation of the conference system is discussed.

  18. In situ process monitoring in selective laser sintering using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Gardner, Michael R.; Lewis, Adam; Park, Jongwan; McElroy, Austin B.; Estrada, Arnold D.; Fish, Scott; Beaman, Joseph J.; Milner, Thomas E.

    2018-04-01

    Selective laser sintering (SLS) is an efficient process in additive manufacturing that enables rapid part production from computer-based designs. However, SLS is limited by its notable lack of in situ process monitoring when compared with other manufacturing processes. We report the incorporation of optical coherence tomography (OCT) into an SLS system in detail and demonstrate access to surface and subsurface features. Video frame rate cross-sectional imaging reveals areas of sintering uniformity and areas of excessive heat error with high temporal resolution. We propose a set of image processing techniques for SLS process monitoring with OCT and report the limitations and obstacles for further OCT integration with SLS systems.

  19. Enhancing Mother Infant Interactions through Video Feedback Enabled Interventions in Women with Schizophrenia: A Single Subject Research Design Study.

    PubMed

    Reddy, Pashapu Dharma; Desai, Geehta; Hamza, Ameer; Karthik, Sheshachala; Ananthanpillai, Supraja Thirumalai; Chandra, Prabha S

    2014-10-01

    It has been shown that mother-infant interactions are often impaired in mothers with schizophrenia. Contributory factors include psychotic symptoms, negative symptoms, and surrogate parenting by others. This study describes the effectiveness of video feedback in enhancing mother-infant interaction in mothers with schizophrenia who have impaired interaction with their infants. Two women with schizophrenia who were admitted for persistent psychotic symptoms and poor mothering skills participated in the intervention. Pre-intervention parenting assessment was done using video recording of mother-infant interaction. Six sessions of mothering intervention were provided using video feedback, and a repeat recording was done. Pre- and post-intervention videos were subsequently rated blind by an independent expert in perinatal psychiatry using the pediatric infant parent exam (PIPE) scale. Comparison of pre- and post-intervention PIPE scores indicated significant improvement in several areas of mothering. Video feedback is a simple and inexpensive tool that can be used to improve mothering skills among mothers with postpartum psychosis or schizophrenia, even in low-resource settings.

  20. Video copy protection and detection framework (VPD) for e-learning systems

    NASA Astrophysics Data System (ADS)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares copyright-protection approaches for digital video files, which can be categorized as content-based copy detection and digital-watermarking copy detection. We then describe how to protect a digital video using a dedicated video data-hiding method and algorithm, and how to detect the copyright status of a file. Building on the current direction of video copy-detection technology and combining it with our own research results, we put forward a new video protection and copy-detection approach for plagiarism and e-learning systems based on video data hiding. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  1. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  2. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  3. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  4. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  5. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  6. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  7. 47 CFR 76.1505 - Public, educational and governmental access.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1505 Public, educational and governmental access. (a) An open video system operator shall be subject to public, educational and... video system operator must ensure that all subscribers receive any public, educational and governmental...

  8. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  9. 47 CFR 76.1512 - Programming information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1512 Programming information. (a) An open video system operator shall not unreasonably discriminate in favor of itself or its affiliates... for the purpose of selecting programming on the open video system, or in the way such material or...

  10. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  11. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  12. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  13. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  14. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  15. Joint Force Quarterly. Number 17, Autumn/Winter 1997-98

    DTIC Science & Technology

    1998-02-01

    capabilities ■ small conventional submarines with smart torpedoes, together with both simple and sophisticated sea mines ■ precision weaponry such as... sites, books, and training videos, industry products come with extensive support. And our people are more familiar with it. While it may take an... object of IT-21 is to link U.S. forces and eventually allies in an asynchronous transfer mode network to enable voice, video, and data transmissions

  16. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.

  17. A High-Resolution Minimicroscope System for Wireless Real-Time Monitoring.

    PubMed

    Wang, Zongjie; Boddeda, Akash; Parker, Benjamin; Samanipour, Roya; Ghosh, Sanjoy; Menard, Frederic; Kim, Keekyoung

    2018-07-01

    A compact, cost-effective, high-performance microscope that enables real-time imaging of cells and lab-on-a-chip devices is in high demand in cell biology and biomedical engineering. This paper presents the design and application of an inexpensive wireless minimicroscope with resolution up to 2592 × 1944 pixels and frame rates up to 90 f/s. The minimicroscope system was built on a commercial embedded system (Raspberry Pi). We modified a camera module and adopted an inverse dual-lens system to obtain a clear field of view and appropriate magnification for objects tens of micrometers in size. The system was capable of capturing time-lapse images and transferring image data wirelessly. The entire system can be operated wirelessly and cordlessly in a conventional cell-culture incubator. The developed minimicroscope was used to monitor the attachment and proliferation of NIH-3T3 and HEK 293 cells inside an incubator for 50 h. In addition, the minimicroscope was used to monitor a droplet-generation process in a microfluidic device. The high-quality images captured by the minimicroscope enabled automated analysis of experimental parameters. These successful applications demonstrate the great potential of the developed minimicroscope for monitoring various biological samples and microfluidic devices. In summary, this paper presents the design of a high-resolution minimicroscope system that enables wireless real-time imaging of cells inside an incubator; the system has been verified to be a useful tool for obtaining high-quality images and videos for long-term automated quantitative analysis of biological samples and lab-on-a-chip devices.

  18. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process, including building and conducting experiments and analyzing the data. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods, and it enables the creation of new methods or modified versions of standard methods. VQone is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's functions can be adjusted to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  19. Video Relay Service for Signing Deaf - Lessons Learnt from a Pilot Study

    NASA Astrophysics Data System (ADS)

    Ponsard, Christophe; Sutera, Joelle; Henin, Michael

    The generalization of high-speed Internet, efficient compression techniques, and low-cost hardware has made low-cost video communication possible since around 2000. For the Deaf community, this enables native communication in sign language and better communication with hearing people over the phone. As a result, Video Relay Services can take over from the older Text Relay Services, which are less natural and require mastery of written language. A number of such services have developed throughout the world. The objectives of this paper are to present the experience gained in the Walloon Region of Belgium, to share a number of lessons learnt, and to provide recommendations at the technical, user-adoption, and political levels. A survey of video relay services around the world is presented together with feedback from users both before and after using the pilot service.

  20. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos' content and approach to visualization.

    PubMed

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed eligible videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8-14 key points) were covered; no videos covered all of the 18 points assessed. We found the most difficulties in the approach to visualization of how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and give particular emphasis on fundus examination.

  1. 77 FR 75659 - Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-21

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software..., 2012, based on a complaint filed by ObjectVideo, Inc. (``ObjectVideo'') of Reston, Virginia. 77 FR... United States after importation of certain video analytics software systems, components thereof, and...

  2. Learning a Continuous-Time Streaming Video QoE Model.

    PubMed

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C

    2018-05-01

    Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.

  3. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  4. Improving Photometric Calibration of Meteor Video Camera Systems.

    PubMed

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-09-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.

  5. Improving Photometric Calibration of Meteor Video Camera Systems

    NASA Technical Reports Server (NTRS)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
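
    The zero-point step described in the two records above lends itself to a compact numerical sketch: with instrumental magnitudes measured from the video and synthetic reference magnitudes computed in the camera bandpass, the zero point is the clipped mean offset between the two. All star values below are invented for illustration; this is not MEO code.

        import numpy as np

        m_inst = np.array([-7.2, -6.5, -8.1, -5.9, -7.8])  # instrumental magnitudes
        m_synth = np.array([3.1, 3.8, 2.2, 4.4, 2.6])      # synthetic EX-band magnitudes

        diff = m_synth - m_inst
        # Reject outliers more than 3 sigma from the median before averaging.
        keep = np.abs(diff - np.median(diff)) < 3 * diff.std()
        zp = diff[keep].mean()
        zp_err = diff[keep].std(ddof=1) / np.sqrt(keep.sum())
        print(f"zero point = {zp:.2f} +/- {zp_err:.2f} mag")
        # A meteor's calibrated magnitude is then m = zp + m_inst.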

  6. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.

  7. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.

  8. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    PubMed Central

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058
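
    The throughput figures quoted in the two records above are easy to sanity-check: at 8 Mpx/s per CPU, a 640 px × 352 px frame takes about 1/35 s to process.

        pixels_per_frame = 640 * 352          # 225,280 px per frame
        throughput = 8_000_000                # px/s on one core, as reported
        print(throughput / pixels_per_frame)  # ~35.5 frames processed per second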

  9. STS-11/41-B Post Flight Press Conference

    NASA Technical Reports Server (NTRS)

    1984-01-01

    This NASA KSC video release begins with opening remarks from Mission Commander Vance D. Brand, followed by the other four crew panel members (Robert L. Gibson, Pilot, and Mission Specialists Bruce McCandless II, Ronald E. McNair, and Robert L. Stewart) commenting on a home video that includes highlights of the entire flight from take-off to landing. This video includes actual footage of the deployment of the Westar-VI and PALAPA-B2 satellites as well as preparation for, and the conduct of, the EVAs, which featured a Spacepak that enabled the astronauts to move outside the orbiter untethered. The video is followed by a slide presentation made up of images selected from approximately 2000 still photographs taken during the mission. All of the slides are described by members of the space crew and include images of the Earth seen from Challenger. A question-and-answer period rounds out the video, covering problems encountered with the deployment of the satellites as well as the possibilities of sending civilians into space.

  10. Computer assisted video analysis of swimming performance in a forced swim test: simultaneous assessment of duration of immobility and swimming style in mice selected for high and low swim-stress induced analgesia.

    PubMed

    Juszczak, Grzegorz R; Lisowski, Paweł; Sliwa, Adam T; Swiergiel, Artur H

    2008-10-20

    In behavioral pharmacology, two problems are encountered when quantifying animal behavior: 1) reproducibility of results across laboratories, especially in the case of manual scoring of animal behavior; and 2) the presence of behavioral idiosyncrasies, common in genetically different animals, that mask or mimic the effects of experimental treatments. This study aimed to develop an automated method enabling simultaneous assessment of the duration of immobility in mice and the depth of body submersion during swimming by means of a computer-assisted video analysis system (EthoVision from Noldus). We tested and compared parameters of immobility based either on the speed of the object's (animal's) movement or on the percentage change in the object's area between consecutive video frames. We also examined the effects of an erosion-dilation filtering procedure on the results obtained with both parameters of immobility. Finally, we propose an automated method for assessing the depth of body submersion that reflects swimming performance. Both parameters of immobility were sensitive to the effect of an antidepressant, desipramine, and yielded similar results when applied to mice that are good swimmers. The speed parameter was, however, more sensitive and more reliable because it depended less on random noise in the video image. Applying the erosion-dilation filtering procedure increased the reliability of both parameters of immobility. In the case of mice that were poor swimmers, the assessed duration of immobility differed depending on the chosen parameter, resulting in the presence or absence of differences between two lines of mice that differed in swimming performance. These results substantiate the need to assess swimming performance when the duration of immobility in the FST is compared across lines that differ in their swimming "styles". Testing swimming performance can also be important in studies investigating the effects of swim stress on other behavioral or physiological parameters, because poor swimming abilities displayed by some lines can increase the severity of swim stress, masking between-line differences or main treatment effects.
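
    A minimal sketch of the area-change immobility measure and the erosion-dilation cleanup discussed above, using OpenCV as a generic stand-in for the EthoVision system. The file name, the fixed threshold, and the assumption that the mouse is darker than the water are all simplifications for illustration.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("forced_swim_test.avi")  # hypothetical recording
        kernel = np.ones((3, 3), np.uint8)
        prev = None

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
            # Erosion followed by dilation (morphological opening) removes
            # single-pixel video noise, the filtering step reported to make
            # both immobility parameters more reliable.
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            if prev is not None:
                changed = cv2.countNonZero(cv2.absdiff(mask, prev))
                area = max(cv2.countNonZero(prev), 1)
                print(f"area change: {100.0 * changed / area:.1f}%")
            prev = mask
        cap.release()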

  11. Application of robust face recognition in video surveillance systems

    NASA Astrophysics Data System (ADS)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search-indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record without fixed postures for the subject, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from constant monitoring duty and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
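
    A sketch of the occlusion-reconstruction idea, with ordinary PCA standing in for the paper's fuzzy PCA: fit eigenface coefficients using only the visible pixels, then read the full face, occluded region included, off the basis. The training data, image size, and occlusion mask below are all stand-ins for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        train = rng.random((200, 32 * 32))   # 200 aligned face images (stand-in data)
        mean = train.mean(axis=0)
        _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
        basis = Vt[:40]                      # top 40 eigenfaces

        face = rng.random(32 * 32)           # probe face (stand-in data)
        visible = np.ones(32 * 32, dtype=bool)
        visible[: 32 * 10] = False           # top ten rows occluded

        # Least-squares fit of the eigenface coefficients on visible pixels only.
        coef, *_ = np.linalg.lstsq(basis[:, visible].T,
                                   face[visible] - mean[visible], rcond=None)
        reconstructed = mean + coef @ basis  # occluded region is filled in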

  12. 75 FR 68379 - In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-743] In the Matter of: Certain Video Game Systems... within the United States after importation of certain video game systems and controllers by reason of... certain video game systems and controllers that infringe one or more of claims 16, 27-32, 44, 57, 68, 81...

  13. Playing with food. A novel approach to understanding nutritional behaviour development.

    PubMed

    Lynch, Meghan

    2010-06-01

    This study explored a novel method of collecting data on the development of nutritional behaviour in young children: videos posted on the Internet site YouTube. YouTube videos (n=115) of children alone and interacting with parents in toy kitchen settings were analyzed using constant comparison analysis. Results revealed that in the play nutritional behaviours shown in the videos, children reflected influences of their real social environments, and that this medium enabled the observation of parent-child interactions in a more natural context, without the researcher's presence. These findings encourage further research on the development and validity of alternative methods of data collection. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between inter-frames along the temporal direction using motion-compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
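
    The redundancy-versus-recovery trade at the heart of multiple description coding can be shown with a deliberately simplified stand-in: splitting a frame into even- and odd-column descriptions, then concealing a lost description by interpolation. The paper's actual scheme instead splits wavelet spatial-orientation trees and codes each stream with SPIHT; nothing below reproduces that codec.

        import numpy as np

        frame = np.arange(64, dtype=float).reshape(8, 8)  # stand-in image
        desc0, desc1 = frame[:, 0::2], frame[:, 1::2]     # two descriptions

        # Side decoder: only description 0 arrived; estimate the missing
        # odd columns by averaging their even-column neighbours.
        recon = np.repeat(desc0, 2, axis=1)
        recon[:, 1:-1:2] = 0.5 * (recon[:, :-2:2] + recon[:, 2::2])
        print(np.abs(recon - frame).mean())  # side-decoder concealment error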

  15. Active eye-tracking for an adaptive optics scanning laser ophthalmoscope

    PubMed Central

    Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin

    2015-01-01

    We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370

  16. Converting laserdisc video to digital video: a demonstration project using brain animations.

    PubMed

    Jao, C S; Hier, D B; Brint, S U

    1995-01-01

    Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.

  17. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    PubMed

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
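
    The inter-rater reliability percentages reported above read as simple percent agreement between the two coders of a double-coded video; a sketch of that computation follows, with the code lists invented for the example (the study does not publish its scoring sheets).

        coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # hypothetical skill codes
        coder_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]

        agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
        print(f"percent agreement: {agreement:.0%}")  # -> 90%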

  18. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  19. Utilization of KSC Present Broadband Communications Data System For Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2001-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  20. 47 CFR 76.1510 - Application of certain Title VI provisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1510 Application of certain Title VI provisions. The following sections within part 76 shall also apply to open video systems..., that these sections shall apply to open video systems only to the extent that they do not conflict with...

  1. 47 CFR 76.1510 - Application of certain Title VI provisions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1510 Application of certain Title VI provisions. The following sections within part 76 shall also apply to open video systems..., that these sections shall apply to open video systems only to the extent that they do not conflict with...

  2. 47 CFR 76.1510 - Application of certain Title VI provisions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1510 Application of certain Title VI provisions. The following sections within part 76 shall also apply to open video systems..., that these sections shall apply to open video systems only to the extent that they do not conflict with...

  3. 47 CFR 76.1510 - Application of certain Title VI provisions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1510 Application of certain Title VI provisions. The following sections within part 76 shall also apply to open video systems..., that these sections shall apply to open video systems only to the extent that they do not conflict with...

  4. 47 CFR 76.1510 - Application of certain Title VI provisions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76.1510 Application of certain Title VI provisions. The following sections within part 76 shall also apply to open video systems..., that these sections shall apply to open video systems only to the extent that they do not conflict with...

  5. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software, GARS, has limited automated functions, such as scene-change detection, black-image detection, and missing-scene analysis, and struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).
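
    For orientation, the scene-change detection that the existing review software performs can be approximated by comparing intensity histograms of consecutive frames; this generic OpenCV sketch is in no way the IAEA tool or the proposed deep learning algorithm, and the file name and threshold are assumptions.

        import cv2

        cap = cv2.VideoCapture("surveillance.mpg")  # hypothetical recording
        prev_hist, frame_idx = None, 0

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            hist = cv2.normalize(hist, None).flatten()
            if prev_hist is not None:
                # Correlation near 1 means similar frames; a dip flags a cut.
                if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 0.7:
                    print(f"scene change near frame {frame_idx}")
            prev_hist, frame_idx = hist, frame_idx + 1
        cap.release()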

  6. From video to computation of biological fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Dillard, Seth I.; Buchholz, James H. J.; Udaykumar, H. S.

    2016-04-01

    This work deals with the techniques necessary to obtain a purely Eulerian procedure to conduct CFD simulations of biological systems with moving boundary flow phenomena. Eulerian approaches obviate difficulties associated with mesh generation to describe or fit flow meshes to body surfaces. The challenges associated with constructing embedded boundary information, body motions and applying boundary conditions on the moving bodies for flow computation are addressed in the work. The overall approach is applied to the study of a fluid-structure interaction problem, i.e., the hydrodynamics of swimming of an American eel, where the motion of the eel is derived from video imaging. It is shown that some first-blush approaches do not work, and therefore, careful consideration of appropriate techniques to connect moving images to flow simulations is necessary and forms the main contribution of the paper. A combination of level set-based active contour segmentation with optical flow and image morphing is shown to enable the image-to-computation process.
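
    A minimal sketch of the optical-flow ingredient named above: dense Farneback flow between consecutive frames, whose displacement field can carry a segmented body contour forward in time. This is a generic OpenCV stand-in for the paper's combined active-contour, optical-flow, and morphing pipeline; the file name is assumed.

        import cv2

        cap = cv2.VideoCapture("eel_swimming.avi")  # hypothetical video
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                pyr_scale=0.5, levels=3,
                                                winsize=15, iterations=3,
                                                poly_n=5, poly_sigma=1.2, flags=0)
            # flow[y, x] holds the (dx, dy) displacement of each pixel;
            # sampled at boundary points it gives the moving-body velocity
            # needed by an embedded-boundary flow solver.
            prev_gray = gray
        cap.release()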

  7. Is partially automated driving a bad idea? Observations from an on-road study.

    PubMed

    Banks, Victoria A; Eriksson, Alexander; O'Donoghue, Jim; Stanton, Neville A

    2018-04-01

    The automation of longitudinal and lateral control has enabled drivers to become "hands and feet free" but they are required to remain in an active monitoring state with a requirement to resume manual control if required. This represents the single largest allocation of system function problem with vehicle automation as the literature suggests that humans are notoriously inefficient at completing prolonged monitoring tasks. To further explore whether partially automated driving solutions can appropriately support the driver in completing their new monitoring role, video observations were collected as part of an on-road study using a Tesla Model S being operated in Autopilot mode. A thematic analysis of video data suggests that drivers are not being properly supported in adhering to their new monitoring responsibilities and instead demonstrate behaviour indicative of complacency and over-trust. These attributes may encourage drivers to take more risks whilst out on the road. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Tight Loops Close-Up [video

    NASA Image and Video Library

    2014-05-19

    NASA's Solar Dynamics Observatory (SDO) zoomed in almost to its maximum level to watch tight, bright loops and much longer, softer loops shift and sway above an active region on the sun, while a darker blob of plasma in their midst was pulled about every which way (May 13-14, 2014). The video clip covers just over a day beginning at 14:19 UT on May 13. The frames were taken in the 171-angstrom wavelength of extreme ultraviolet light, but colorized red instead of the usual bronze tone. This type of dynamic activity continues almost non-stop on the sun as opposing magnetic forces tangle with each other. Credit: NASA/Solar Dynamics Observatory

  9. The video fluorescent device for diagnostics of cancer of human reproductive system

    NASA Astrophysics Data System (ADS)

    Brysin, Nickolay N.; Linkov, Kirill G.; Stratonnikov, Alexander A.; Savelieva, Tatiana A.; Loschenov, Victor B.

    2008-06-01

    Photodynamic therapy (PDT) is one of the advanced methods of treatment of cancers of the skin and of the surfaces of internal organs. The basic advantages of PDT are high efficiency and low cost of treatment. The PDT technique requires fluorescent diagnostics. Laser-based systems are widely used for fluorescence excitation in diagnostics because of their narrow excitation spectrum and high radiation density. However, laser excitation produces tumor images distorted by speckle, which prevents rapid acquisition of complete information about tumor shape; laser excitation systems are also structurally complex and expensive. A commercially produced colposcope was chosen as the basis for developing the video fluorescent device; this reduces the cost of the device and also makes it possible to upgrade colposcopes already in use. An LED-based light source is used for fluorescence excitation in this work. The emission maximum of the LEDs corresponds to the main absorption maximum of protoporphyrin IX (PPIX). Irradiance in the center of the light spot is 31 mW/cm2. The receiving optics of the fluorescence channel are tuned to 635 nm, where the main fluorescence maximum of PPIX is located. The device also contains an RGB video channel, a white-light source, and a USB spectrometer (LESA-01-BIOSPEC) for measuring fluorescence and diffuse-reflectance spectra in the treatment area. Software was developed to operate the device. Studies on laboratory animals correctly detected areas with increased PPIX concentration. At present, the device is used for diagnostics of cancer of the female reproductive system at the Research Centre for Obstetrics, Gynecology and Perinatology of the Russian Academy of Medical Sciences (Moscow, Russia).

  10. Experimental application of simulation tools for evaluating UAV video change detection

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Bartelsen, Jan

    2015-10-01

    Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the interesting area and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are versatile objects like trees and compression or transmission artifacts. To enable the usage of an automatic change detection within an interactive workflow of an UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g. image quality, sensor and flight parameters) including image metadata and ground truth data are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection is described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
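
    The two processing-chain components exercised in the experiment, image registration and change-mask extraction, can be sketched generically with OpenCV: register one overflight frame to another via feature matching and a homography, then threshold the difference image. File names and the threshold are assumptions, and this is not the authors' implementation.

        import cv2
        import numpy as np

        ref = cv2.imread("overflight_1.png", cv2.IMREAD_GRAYSCALE)
        new = cv2.imread("overflight_2.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(ref, None)
        k2, d2 = orb.detectAndCompute(new, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)

        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers

        aligned = cv2.warpPerspective(new, H, ref.shape[::-1])
        _, change_mask = cv2.threshold(cv2.absdiff(ref, aligned),
                                       40, 255, cv2.THRESH_BINARY)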

  11. Comparison of Three Optical Methods for Measuring Model Deformation

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Fleming, G. A.; Hoppe, J. C.

    2000-01-01

    The objective of this paper is to compare the current state-of-the-art of the following three optical techniques under study by NASA for measuring model deformation in wind tunnels: (1) video photogrammetry, (2) projection moire interferometry, and (3) the commercially available Optotrak system. An objective comparison of these three techniques should enable the selection of the best technique for a particular test undertaken at various NASA facilities. As might be expected, no one technique is best for all applications. The techniques are also not necessarily mutually exclusive and in some cases can be complementary to one another.

  12. Perioperative nurse training in cardiothoracic surgical robotics.

    PubMed

    Connor, M A; Reinbolt, J A; Handley, P J

    2001-12-01

    The exponential growth of OR technology during the past 10 years has placed increased demands on perioperative nurses. Proficiency is required not only in patient care but also in the understanding, operating, and troubleshooting of video systems, computers, and cutting edge medical devices. The formation of a surgical team dedicated to robotically assisted cardiac surgery requires careful selection, education, and hands-on practice. This article details the six-week training process undertaken at Sarasota Memorial Hospital, Sarasota, Fla, which enabled staff members to deliver excellent patient care with a high degree of confidence in themselves and the robotic technology.

  13. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos

    PubMed Central

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-01-01

    Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986

  14. Development of low-noise CCD drive electronics for the world space observatory ultraviolet spectrograph subsystem

    NASA Astrophysics Data System (ADS)

    Salter, Mike; Clapp, Matthew; King, James; Morse, Tom; Mihalcea, Ionut; Waltham, Nick; Hayes-Thakore, Chris

    2016-07-01

    World Space Observatory Ultraviolet (WSO-UV) is a major Russian-led international collaboration to develop a large space-borne 1.7 m Ritchey-Chrétien telescope and instrumentation to study the universe at ultraviolet wavelengths between 115 nm and 320 nm, exceeding the current capabilities of ground-based instruments. The WSO Ultraviolet Spectrograph subsystem (WUVS) is led by the Institute of Astronomy of the Russian Academy of Sciences and consists of two high resolution spectrographs covering the Far-UV range of 115-176 nm and the Near-UV range of 174-310 nm, and a long-slit spectrograph covering the wavelength range of 115-305 nm. The custom-designed CCD sensors and cryostat assemblies are being provided by e2v technologies (UK). STFC RAL Space is providing the Camera Electronics Boxes (CEBs) which house the CCD drive electronics for each of the three WUVS channels. This paper presents the results of the detailed characterisation of the WUVS CCD drive electronics. The electronics include a novel high-performance video channel design that utilises Digital Correlated Double Sampling (DCDS) to enable low-noise readout of the CCD at a range of pixel frequencies, including a baseline requirement of less than 3 electrons rms readout noise for the combined CCD and electronics system at a readout rate of 50 kpixels/s. These results illustrate the performance of this new video architecture as part of a wider electronics sub-system that is designed for use in the space environment. In addition to the DCDS video channels, the CEB provides all the bias voltages and clocking waveforms required to operate the CCD and the system is fully programmable via a primary and redundant SpaceWire interface. The development of the CEB electronics design has undergone critical design review and the results presented were obtained using the engineering-grade electronics box. A variety of parameters and tests are included ranging from general system metrics, such as the power and mass, to more detailed analysis of the video performance including noise, linearity, crosstalk, gain stability and transient response.
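
    The noise-reduction idea behind DCDS can be shown numerically. In correlated double sampling, the reset (reference) level and the video level of each pixel share the same kTC reset-noise offset, so differencing the two cancels it; performing the sampling digitally lets many raw samples of each level be averaged, beating down the white noise as well. The following is a minimal simulation of that principle only; the sample counts and noise figures are made-up illustrative numbers, not the CEB's actual operating parameters or firmware.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dcds_estimate(reset_noise, signal_e, n_samples, white_noise):
        """Digitally average n_samples of the reference (reset) level and of
        the video level for one pixel, then difference them (correlated
        double sampling performed digitally)."""
        # The reset (kTC) offset is common to both levels, so it cancels.
        ref = reset_noise + white_noise * rng.standard_normal(n_samples)
        sig = reset_noise - signal_e + white_noise * rng.standard_normal(n_samples)
        return ref.mean() - sig.mean()

    # Read out 10,000 pixels, each carrying 100 e-, with 20 e- rms reset
    # noise and 10 e- rms white noise per raw sample (illustrative only).
    for n in (1, 4, 16):
        est = [dcds_estimate(20 * rng.standard_normal(), 100.0, n, 10.0)
               for _ in range(10_000)]
        print(f"{n:2d} samples/level: noise = {np.std(est):.2f} e- rms")
    ```

    With n samples per level the residual white noise falls roughly as 1/sqrt(n), which is the kind of averaging gain a DCDS channel relies on to meet a low read-noise requirement at a given pixel rate.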

  15. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features, including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
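
    The mosaic-based foreground extraction is, at its core, a background-differencing operation: once the current view of the active camera is aligned against the background mosaic, foreground objects are the pixels that differ markedly from it. The sketch below shows only that final differencing step in NumPy, with the mosaic construction and camera alignment assumed away; the threshold and arrays are illustrative.

    ```python
    import numpy as np

    def foreground_mask(frame, background, threshold=25):
        """Label pixels that differ markedly from the background model.

        `frame` and `background` are greyscale uint8 arrays of equal shape;
        the paper aligns the active camera's view against a background
        mosaic first, which we simply assume has been done here.
        """
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return diff > threshold

    # Toy example: a flat background with a bright "object" in the frame.
    bg = np.full((4, 4), 50, dtype=np.uint8)
    fr = bg.copy()
    fr[1:3, 1:3] = 200  # foreground region
    print(foreground_mask(fr, bg).astype(int))
    ```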

  16. Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall

    2008-01-01

    The Time Series Product Tool (TSPT) is software, developed in MATLAB, that creates and displays high signal-to-noise Vegetation Indices imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single-time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. The TSPT uses MODIS metadata to remove and/or correct bad and suspect data. Bad-pixel removal, multiple-satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions become widespread and devastating. Integrating the time-series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System could provide the foundation for a large-area crop-surveillance system that identifies a variety of plant phenomena and improves monitoring capabilities.
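
    TSPT itself is MATLAB software, but the kind of temporal processing the abstract describes (QA-based masking of bad pixels, Terra/Aqua fusion, compositing into cleaner vegetation-index series) can be sketched generically. The NumPy example below computes NDVI and applies maximum-value compositing over QA-masked observations; it illustrates the general technique, not TSPT's actual algorithms, and all arrays and flag conventions are assumed.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index for reflectance arrays."""
        return (nir - red) / (nir + red + 1e-9)

    def composite(series, qa_flags):
        """Drop observations flagged as cloudy/suspect, then take the
        per-pixel maximum of the remaining values (maximum-value
        compositing, one common way to build cleaner VI time series).

        series:   (n_obs, n_pixels) NDVI observations, e.g. Terra + Aqua
        qa_flags: same shape, True where the observation is bad
        """
        masked = np.where(qa_flags, -np.inf, series)
        best = masked.max(axis=0)
        return np.where(np.isfinite(best), best, np.nan)

    # Two satellites, three pixels; cloud-contaminated values are flagged.
    obs = np.array([[0.61, 0.55, 0.70],
                    [0.20, 0.58, 0.15]])   # cloud-depressed values
    qa  = np.array([[False, False, False],
                    [True,  False, True]])
    print(composite(obs, qa))  # -> [0.61 0.58 0.70]
    ```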

  17. On-board processing satellite network architecture and control study

    NASA Technical Reports Server (NTRS)

    Campanella, S. Joseph; Pontano, B.; Chalmers, H.

    1987-01-01

    For satellites to remain a vital part of future national and international communications, system concepts that use their inherent advantages to the fullest must be created. Network architectures that take maximum advantage of satellites equipped with onboard processing are explored. Future satellite generations must accommodate the various services for which satellites constitute the preferred delivery vehicle; such services tend to be widely dispersed and present thin to medium loads to the system. Typical systems considered are thin- and medium-route telephony; maritime, land, and aeronautical radio; VSAT data; low-bit-rate video teleconferencing; and high-bit-rate broadcast of high-definition video. Delivery of services by TDMA and FDMA multiplexing techniques, and combinations of the two, for individual and mixed service types is studied. The possibilities offered by onboard circuit-switched and packet-switched architectures are examined, and the results strongly support a preference for the latter. A detailed design architecture encompassing the onboard packet switch and its control, the related demand-assigned TDMA burst structures, and destination packet protocols for routing traffic is presented. Fundamental onboard hardware requirements comprising speed, memory size, chip count, and power are estimated. The study concludes by identifying key enabling technologies and a plan to develop a proof-of-concept (POC) model.
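
    To make the packet-switched concept concrete, the sketch below models the most schematic possible onboard switch: uplinked packets carry a destination field and are queued per downlink beam, from which demand-assigned bursts are drained. This is a hypothetical illustration of destination-directed routing only; the study's actual switch design, burst structures, and control protocols are far more detailed.

    ```python
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        dest_beam: int      # downlink beam the packet is addressed to
        payload: bytes

    @dataclass
    class OnboardPacketSwitch:
        """Queue uplinked packets by destination downlink beam."""
        n_beams: int
        queues: list = field(default_factory=list)

        def __post_init__(self):
            self.queues = [deque() for _ in range(self.n_beams)]

        def route(self, packet: Packet):
            # Destination-directed routing: the packet header alone
            # determines which downlink queue it joins.
            self.queues[packet.dest_beam].append(packet)

        def next_burst(self, beam: int, max_packets: int):
            # Drain up to max_packets for this beam's next TDMA burst.
            q = self.queues[beam]
            return [q.popleft() for _ in range(min(max_packets, len(q)))]

    switch = OnboardPacketSwitch(n_beams=4)
    switch.route(Packet(dest_beam=2, payload=b"hello"))
    print(len(switch.next_burst(beam=2, max_packets=8)))  # -> 1
    ```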

  18. Telecommunication Support System Using Keywords and Their Relevant Information in Videoconferencing — Presentation Method for Keeping Audience's Concentration at Distance Lectures

    NASA Astrophysics Data System (ADS)

    Asai, Kikuo; Kondo, Kimio; Kobayashi, Hideaki; Saito, Fumihiko

    We developed a prototype system to support telecommunication by using keywords selected by the speaker in a videoconference. In the traditional presentation style, a speaker talks and uses audiovisual materials, and the audience at remote sites looks at these materials. Unfortunately, the audience often loses concentration and attention during the talk. To overcome this problem, we investigate a keyword presentation style in which the speaker holds keyword cards that enable the audience to see additional information. Although keyword captions were originally intended for use in video materials for learning foreign languages, they can also be used to improve the quality of distance lectures in videoconferences. Our prototype system recognizes printed keywords in the video image at a server and transfers the data to clients as multimedia functions such as language translation, three-dimensional (3D) model visualization, and audio reproduction. The additional information is collocated with the keyword cards in the display window, thus forming a spatial relationship between them. We conducted an experiment to investigate the properties of the keyword presentation style for an audience. The results suggest the potential of the keyword presentation style for improving the audience's concentration and attention in distance lectures by providing an environment that facilitates eye contact during videoconferencing.

  19. Digital Video Over Space Systems and Networks

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2010-01-01

    This slide presentation reviews the use of digital video with space systems and networks. The earliest space video was captured on film, which precluded live viewing; film gave way to live television from space, and that in turn to digital video transmitted using Internet Protocol. This transition has brought many improvements along with new challenges, some of which are reviewed. Digital video transmitted over space systems can provide incredible imagery, but the process must be viewed as an entire system rather than piecemeal.

  20. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method that uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image-guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope with a field of view of approximately 110 × 80 mm. We have extended an existing information-theoretic framework for 2D-3D registration so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered each individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
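
    The similarity measure at the heart of this method, mutual information between a video image and a rendering of the pre-operative data, can be estimated from a joint intensity histogram. The sketch below computes that measure in NumPy and checks that it is higher for a well-aligned image pair than for a misaligned one; the pose-optimization loop and the rendering itself are outside its scope, and the test images are synthetic.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Mutual information between two equally sized greyscale images,
        estimated from their joint intensity histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint distribution
        px = pxy.sum(axis=1, keepdims=True)       # marginal of a
        py = pxy.sum(axis=0, keepdims=True)       # marginal of b
        nonzero = pxy > 0
        return float((pxy[nonzero] *
                      np.log(pxy[nonzero] / (px @ py)[nonzero])).sum())

    # MI peaks when the two images correspond to the same (correct) pose:
    rng = np.random.default_rng(1)
    video = rng.integers(0, 256, (64, 64)).astype(float)
    aligned = video + 5 * rng.standard_normal((64, 64))       # good pose
    shifted = np.roll(video, 8, axis=0) + 5 * rng.standard_normal((64, 64))
    print(mutual_information(video, aligned) >
          mutual_information(video, shifted))  # -> True
    ```

    In the paper's setting the optimizer iteratively adjusts the rendering's pose parameters to maximize this quantity, summed over all registered video views.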
