Sample records for video streaming system

  1. Maximizing Resource Utilization in Video Streaming Systems

    ERIC Educational Resources Information Center

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  2. Scalable Video Streaming in Wireless Mesh Networks for Education

    ERIC Educational Resources Information Center

    Liu, Yan; Wang, Xinheng; Zhao, Liqiang

    2011-01-01

    In this paper, a video streaming system for education based on a wireless mesh network is proposed. A wireless mesh network is a self-organizing, self-managing and reliable intelligent network, which allows educators to deploy a network quickly. Video streaming plays an important role in this system for multimedia data transmission. This new…

  3. Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2015-02-01

    The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has previously been evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs; moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set, employing video sequences long enough to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and the expectations of the users. The results show the impact of different network impairments and different video segment sizes on users' QoE, and further analysis and study may help in optimizing system performance.
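
    As a methodological aside, MOS values such as those reported above are commonly aggregated as a sample mean with a confidence interval. A minimal sketch in Python, with hypothetical ratings on the usual 1-5 ACR scale (none of these numbers come from the paper):

    ```python
    from statistics import mean, stdev

    def mos_with_ci(ratings, z=1.96):
        """Mean Opinion Score with a normal-approximation 95% confidence interval."""
        m = mean(ratings)
        half = z * stdev(ratings) / len(ratings) ** 0.5
        return m, (m - half, m + half)

    # Hypothetical ratings on the usual 1 (bad) .. 5 (excellent) ACR scale.
    ratings = [4, 5, 3, 4, 4, 5, 3, 4]
    mos, ci = mos_with_ci(ratings)
    print(f"MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```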

  4. Telemetry and Communication IP Video Player

    NASA Technical Reports Server (NTRS)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized for use during launches. The VLC plug-in can be configured programmatically to display a single stream, but this project needed access to multiple streams. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos simultaneously and watching a video in full screen.
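
    The player described above is C# with the VLC ActiveX plug-in; as a rough cross-language sketch of the same switch-between-streams idea, here is a minimal version using the python-vlc bindings (the stream URLs and menu are placeholders, not details from the report):

    ```python
    import vlc

    # Placeholder stream MRLs standing in for the launch video feeds.
    STREAMS = {
        "1": "rtsp://example.org/pad-camera",
        "2": "rtsp://example.org/tracking-camera",
    }

    player = vlc.MediaPlayer()

    def switch_to(key):
        """Stop the current stream and start the selected one."""
        player.stop()
        player.set_mrl(STREAMS[key])
        player.play()

    while True:
        choice = input("Stream [1/2, q to quit]: ").strip()
        if choice == "q":
            player.stop()
            break
        if choice in STREAMS:
            switch_to(choice)
    ```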

  5. Layer-based buffer aware rate adaptation design for SHVC video streaming

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based, buffer-aware rate adaptation design that avoids abrupt video quality fluctuation, reduces re-buffering latency, and improves bandwidth utilization compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers, and layer buffer fullness. Scalable HEVC is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first is streaming HD SHVC video over a wireless network with varying available bandwidth, with a performance comparison between the proposed layer-based streaming approach and the conventional simulcast approach. The second is streaming 4K/UHD SHVC video over a hybrid access network consisting of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach utilizes the bandwidth more efficiently. As a result, a more consistent viewing experience with higher-quality video content and minimal video quality fluctuations can be presented to the user.
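
    A toy sketch of the layer scheduling idea described above: pick the highest decodable layer that the estimated bandwidth and the buffer level allow. The rates, thresholds and layer set are invented for illustration and are not the paper's algorithm:

    ```python
    # Cumulative bitrates (kbps) for BL and BL+EL combinations -- illustrative only.
    LAYER_RATES = [1500, 4000, 8000]  # BL, BL+EL1, BL+EL1+EL2

    def choose_layers(est_bandwidth_kbps, buffer_s, low_water=5.0, high_water=15.0):
        """Pick the highest decodable layer the bandwidth and buffer allow.

        Layers are dependent: EL_n is useless without all lower layers,
        so the decision is over cumulative rates, not individual ones.
        """
        if buffer_s < low_water:          # buffer draining: fall back to base layer
            return 0
        affordable = [i for i, r in enumerate(LAYER_RATES)
                      if r <= est_bandwidth_kbps]
        top = max(affordable, default=0)
        if buffer_s < high_water:         # cautious zone: stay one layer below max
            top = max(0, top - 1)
        return top

    print(choose_layers(est_bandwidth_kbps=6000, buffer_s=20.0))  # -> 1 (BL+EL1)
    ```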

  6. Content-based TV sports video retrieval using multimodal analysis

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, a kind of retrieval based on the semantic content of the video. Because video data is composed of multimodal information streams, such as visual, auditory, and textual streams, we describe a strategy of using multimodal analysis to automatically parse sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing, and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval, allowing users to quickly browse tree-like video clips or input keywords within a predefined domain.

  7. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote its NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool was designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets themselves reside on a separate repository. The prototype tool was designed using ColdFusion 5.0.

  8. Proxy-assisted multicasting of video streams over mobile wireless networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody

    2005-03-01

    This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head streaming content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies: the resulting load balancing provides better video quality to a larger audience.

  9. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
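
    The "blocking model developed for telephone systems" is plausibly the Erlang-B formula; under that assumption, a short sketch comparing a monolithic pool of stream slots with the same capacity split into partitions illustrates the economies of scale the paper quantifies (the numbers are invented):

    ```python
    def erlang_b(offered_load, servers):
        """Blocking probability via the standard Erlang-B recursion."""
        b = 1.0
        for n in range(1, servers + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    # One pool of 100 stream slots vs. 4 isolated pools of 25, same total load.
    load = 80.0  # offered load in Erlangs (mean concurrent stream requests)
    print(f"monolithic : {erlang_b(load, 100):.4f}")
    print(f"partitioned: {erlang_b(load / 4, 25):.4f}")  # noticeably worse
    ```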

  10. Objective video presentation QoE predictor for smart adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi

    2015-09-01

    How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of video delivered every day through systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing conditions of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. A major challenge in achieving this goal is finding an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is achievable with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
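
    A sketch of the quality-driven (rather than bitrate-driven) selection idea; `ssimplus_score` here is a hypothetical placeholder lookup, not the actual SSIMplus computation:

    ```python
    def ssimplus_score(representation, device):
        """Hypothetical per-device quality lookup for an encoded representation.

        In a real system this would come from a perceptual index computed at
        the encoder; here it is just a placeholder table.
        """
        table = {
            ("720p@3000", "phone"): 92, ("1080p@6000", "phone"): 94,
            ("720p@3000", "tv"): 80,    ("1080p@6000", "tv"): 91,
        }
        return table[(representation, device)]

    def pick_representation(reps, device, bandwidth_kbps, target_quality=90):
        """Cheapest representation meeting the quality target within bandwidth."""
        feasible = [(kbps, r) for r, kbps in reps.items() if kbps <= bandwidth_kbps]
        good = [(kbps, r) for kbps, r in feasible
                if ssimplus_score(r, device) >= target_quality]
        pool = good or feasible        # degrade gracefully if target is unreachable
        return min(pool)[1]

    reps = {"720p@3000": 3000, "1080p@6000": 6000}
    print(pick_representation(reps, "phone", bandwidth_kbps=7000))  # 720p suffices
    print(pick_representation(reps, "tv", bandwidth_kbps=7000))     # needs 1080p
    ```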

  11. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption about the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. The neural-network approach therefore turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, as avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.
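
    The circular back-propagation network itself is not reproduced here; as a structural stand-in, a generic feedforward regressor mapping stream-derived features to a quality score, trained on synthetic data, shows the shape of the approach:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for features extracted from the compressed stream
    # (e.g. bitrate, quantizer scale, motion activity) -- not real data.
    X = rng.uniform(size=(200, 3))
    y = 5.0 - 2.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.1, 200)  # toy "MOS"

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:3]))  # quality estimates for the first three samples
    ```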

  12. ATLAS Live: Collaborative Information Streams

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven; ATLAS Collaboration

    2011-12-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, and inter- and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  13. Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2014-05-01

    Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system: a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Whereas previous studies of DASH have focused on H.264/AVC, we present an empirical evaluation of the HEVC-DASH system on a real-world testbed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an OpenHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions, such as packet loss, bandwidth and delay, on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.

  14. A real-time remote video streaming platform for ultrasound imaging.

    PubMed

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience, yet only a limited number of skillful sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  15. A Stream Runs through IT: Using Streaming Video to Teach Information Technology

    ERIC Educational Resources Information Center

    Nicholson, Jennifer; Nicholson, Darren B.

    2010-01-01

    Purpose: The purpose of this paper is to report student and faculty perceptions from an introductory management information systems course that uses multimedia, specifically streaming video, as a vehicle for teaching students skills in Microsoft Excel and Access. Design/methodology/approach: Student perceptions are captured via a qualitative…

  16. Implementation and Analysis of Real-Time Streaming Protocols.

    PubMed

    Santos-González, Iván; Rivero-García, Alexandra; Molina-Gil, Jezabel; Caballero-Gil, Pino

    2017-04-12

    Communication media have become the primary way of interaction thanks to the discovery and innovation of many new technologies. One of the most widely used communication systems today is video streaming, which is constantly evolving. Such communications are a good alternative to face-to-face meetings, and are therefore very useful for coping with many problems caused by distance. However, they suffer from issues such as bandwidth limitation, network congestion, energy efficiency, cost, reliability and connectivity. Hence, quality of service and quality of experience are considered the two most important issues for this type of communication. This work presents a complete comparative study of two of the most widely used video streaming protocols, the Real Time Streaming Protocol (RTSP) and Web Real-Time Communication (WebRTC). In addition, this paper proposes two new mobile applications that implement those protocols on Android, in order to determine how they are influenced by the two aspects that most affect streaming quality of service: connection establishment time and stream reception time. The new video streaming applications are also compared with the most popular video streaming applications for Android, and the experimental results show that the developed WebRTC implementation outperforms the most popular video streaming applications with respect to stream packet delay.
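
    One of the two metrics the study measures, connection establishment time, can be timed in a few lines; a sketch that opens an RTSP stream with OpenCV and times the setup and the first received frame (the URL is a placeholder, and the paper's own apps are Android implementations):

    ```python
    import time
    import cv2

    URL = "rtsp://example.org/test-stream"  # placeholder, not from the paper

    t0 = time.monotonic()
    cap = cv2.VideoCapture(URL)             # connection establishment
    t1 = time.monotonic()
    ok, frame = cap.read()                  # first media data received
    t2 = time.monotonic()
    cap.release()

    print(f"connect: {t1 - t0:.3f}s, first frame: {t2 - t1:.3f}s, opened={ok}")
    ```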

  17. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are taken in by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms: the output of the camera contains only the information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to implement a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes the prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
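
    A software analogue of the in-camera reduction idea: forward only the pixels flagged as potential defects, so a clean web produces essentially no output on the low-bandwidth bus. The thresholds and line width below are invented:

    ```python
    import numpy as np

    def defect_events(scanline, lo=40, hi=215):
        """Return (position, value) pairs for out-of-range pixels only.

        Mimics the in-camera reduction: a clean web line produces no
        output at all, so the downstream link sees only defect bursts.
        """
        mask = (scanline < lo) | (scanline > hi)
        return list(zip(np.flatnonzero(mask).tolist(), scanline[mask].tolist()))

    line = np.full(2048, 128, dtype=np.uint8)   # one CCD line of uniform web
    line[777] = 12                              # a dark defect
    print(defect_events(line))                  # -> [(777, 12)]
    ```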

  18. Video streaming into the mainstream.

    PubMed

    Garrison, W

    2001-12-01

    Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into websites simply, flexibly and interactively. Technologies such as G3 mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as QuickTime, Windows Media and RealVideo provide reliable data streams as video-on-demand, and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has successfully used streaming technologies together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment to deliver online course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described, and future medical applications are discussed.

  19. A Usability Survey of a Contents-Based Video Retrieval System by Combining Digital Video and an Electronic Bulletin Board

    ERIC Educational Resources Information Center

    Haga, Hirohide; Kaneda, Shigeo

    2005-01-01

    This article describes the survey of the usability of a novel content-based video retrieval system. This system combines video streaming and an electronic bulletin board system (BBS). Comments submitted to the BBS are used to index video data. Following the development of the prototype system an experimental survey with ten subjects was performed.…

  20. Implementation and Analysis of Real-Time Streaming Protocols

    PubMed Central

    Santos-González, Iván; Rivero-García, Alexandra; Molina-Gil, Jezabel; Caballero-Gil, Pino

    2017-01-01

    Communication media have become the primary way of interaction thanks to the discovery and innovation of many new technologies. One of the most widely used communication systems today is video streaming, which is constantly evolving. Such communications are a good alternative to face-to-face meetings, and are therefore very useful for coping with many problems caused by distance. However, they suffer from issues such as bandwidth limitation, network congestion, energy efficiency, cost, reliability and connectivity. Hence, quality of service and quality of experience are considered the two most important issues for this type of communication. This work presents a complete comparative study of two of the most widely used video streaming protocols, the Real Time Streaming Protocol (RTSP) and Web Real-Time Communication (WebRTC). In addition, this paper proposes two new mobile applications that implement those protocols on Android, in order to determine how they are influenced by the two aspects that most affect streaming quality of service: connection establishment time and stream reception time. The new video streaming applications are also compared with the most popular video streaming applications for Android, and the experimental results show that the developed WebRTC implementation outperforms the most popular video streaming applications with respect to stream packet delay. PMID:28417949

  1. SIRSALE: integrated video database management tools

    NASA Astrophysics Data System (ADS)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with the ability to search, access and play back distributed stored video data in the same friendly way as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time-sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology, and we present how dedicated active services provide optimized transport for video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.
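
    A sketch of the sequence/scene/shot structure that SIRSALE browses, together with a naive query over the archivists' annotations; all field names are invented for illustration:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Shot:
        start_s: float
        end_s: float
        annotation: str = ""            # supplied by the archivist

    @dataclass
    class Scene:
        shots: list[Shot] = field(default_factory=list)

    @dataclass
    class VideoDocument:
        title: str
        scenes: list[Scene] = field(default_factory=list)

        def find_shots(self, keyword):
            """Naive content query over the manual annotations."""
            return [s for sc in self.scenes for s in sc.shots
                    if keyword.lower() in s.annotation.lower()]

    doc = VideoDocument("soccer-final", [Scene([Shot(0, 12, "kickoff"),
                                                Shot(12, 30, "goal, replay")])])
    print(doc.find_shots("goal"))
    ```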

  2. An Innovative Streaming Video System With a Point-of-View Head Camera Transmission of Surgeries to Smartphones and Tablets: An Educational Utility.

    PubMed

    Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques

    2017-10-01

    In order to engage medical students and residents from public health centers in using the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed to stream live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study describes the results of an evaluation, at level 1 of Kirkpatrick's Model for Evaluation, of the streaming system's usage during gynecological surgeries, based on the perceptions of medical students and gynecology residents. The setup consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network, created by the streaming system, through an access password and watch the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, including comparisons with watching a procedure in loco. This study was formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, yielding 294 answered items, of which 94.2% expressed agreement with the items' statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20 comparisons. This study presents a system for streaming live video of surgeries to local smartphones and tablets and shows its educational utility, low cost, and simple usage; it offers convenience and satisfactory image resolution, and is thus potentially applicable in surgical teaching.

  3. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a method of 3D-graphics-to-video encoding and streaming, embedded in a remote interactive 3D visualization system, for rapidly presenting a 3D scene on mobile devices without having to download it from the server. In particular, a graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system lets users navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation in network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases markedly.
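
    A sketch of the ROI bit-allocation step: macroblocks overlapping the rasterized object regions get a lower quantization parameter and therefore more bits. The QP values and block size are illustrative, not the paper's settings:

    ```python
    import numpy as np

    def qp_map(roi_mask, base_qp=32, roi_qp_offset=-6, block=16):
        """Per-macroblock QP: blocks overlapping the ROI get a lower QP.

        roi_mask: HxW boolean array from rasterizing the 3D objects of interest.
        """
        h, w = roi_mask.shape
        mb_h, mb_w = h // block, w // block
        qps = np.full((mb_h, mb_w), base_qp, dtype=int)
        for i in range(mb_h):
            for j in range(mb_w):
                if roi_mask[i*block:(i+1)*block, j*block:(j+1)*block].any():
                    qps[i, j] = base_qp + roi_qp_offset   # more bits for the ROI
        return qps

    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 20:40] = True               # one projected object
    print(qp_map(mask))
    ```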

  4. Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu

    P2P (Peer-to-Peer) file sharing architectures are scalable and cost-effective. Hence, the application of P2P architectures to media streaming is attractive and is expected to be an alternative to current video streaming based on IP multicast or content delivery systems, because those systems require expensive network infrastructures and large-scale centralized cache storage systems. In this paper, we investigate P2P progressive download as an enabler of Internet video streaming services. We demonstrated the capability of P2P progressive download both in a laboratory test network and on the Internet. Through the experiments, we clarified the contribution of FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the causes of the download performance degradation observed in the experiments and discussed effective methods for providing video streaming services using P2P progressive download in current heterogeneous networks.

  5. Using Video Conferencing in Lecture Classes

    ERIC Educational Resources Information Center

    Gibbs, Bill; Larson, Erik

    2007-01-01

    Duquesne University's department of journalism and multimedia arts supports many of its classes with Mediasite Live, a video conferencing system that captures the output of presentation devices and streams it live to the Web, as well as recording presentations for Web streaming or recording to CD or DVD. Bill Gibbs and Erik Larson examine the…

  6. Real-time video streaming using H.264 scalable video coding (SVC) in multihomed mobile networks: a testbed approach

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2011-03-01

    Users of the next-generation wireless paradigm known as multihomed mobile networks expect satisfactory quality of service (QoS) when accessing streamed multimedia content. The recent H.264 Scalable Video Coding (SVC) extension to the Advanced Video Coding standard (AVC) offers the facility to adapt real-time video streams in response to the dynamic conditions of the multiple network paths encountered in multihomed wireless mobile networks. Nevertheless, preexisting streaming algorithms were mainly proposed for AVC delivery over multipath wired networks and were evaluated by software simulation. This paper introduces a practical, hardware-based testbed upon which we implement and evaluate real-time H.264 SVC streaming algorithms in a realistic multihomed wireless mobile network environment. We propose an optimised streaming algorithm with several technical contributions. Firstly, we extended the AVC packet prioritisation schemes to reflect the three-dimensional granularity of SVC. Secondly, we designed a mechanism for evaluating the effects of different streamer 'read ahead window' sizes on real-time performance. Thirdly, we took account of the previously unconsidered path switching and mobile network tunnelling overheads encountered in real-world deployments. Finally, we implemented a path condition monitoring and reporting scheme to facilitate intelligent path switching. The proposed system has been experimentally shown to offer a significant improvement in the PSNR of the received stream compared with representative existing algorithms.

  7. COTS technologies for telemedicine applications.

    PubMed

    Triunfo, Riccardo; Tumbarello, Roberto; Sulis, Alessandro; Zanetti, Gianluigi; Lianas, Luca; Meloni, Vittorio; Frexia, Francesca

    2010-01-01

    To demonstrate a simple low-cost system for tele-echocardiology, focused on paediatric cardiology applications. The system was realized using open-source software and COTS technologies. It is based on the transmission of two simultaneous video streams, obtained by direct digitization of the output of an ultrasound machine and by a netcam showing the examination that is taking place. These streams are then embedded into a web page so they are accessible, together with basic video controls, via a standard web browser. The system can also record video streams on a server for further use. The system was tested on a small group of neonatal cases with suspected cardiopathies for a preliminary assessment of its features and diagnostic capabilities. Both the clinical and technological results were encouraging and are leading the way for further experimentation. The presented system can transfer clinical images and videos in an efficient way and in real time. It can be used in the same hospital to support internal consultancy requests, in remote areas using Internet connections and for didactic purposes using low cost COTS appliances and simple interfaces for end users. The solution proposed can be extended to control different medical appliances in those remote hospitals.

  8. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding an optimal quality/volume ratio for video encoding is one of the most pressing problems, due to the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, however, the uncertainties caused by compression of the video signal must be taken into account. Many digital compression methods exist. The aim of the proposed work is to research the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method of processing the received video signal are sources of error in television measurements. In compression with a constant data stream rate, errors lead to large distortions; in compression with constant quality, errors increase the amount of data required to transmit or record an image frame. Intra-coding aims to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other, and entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. For typical images, a transformation can be chosen such that most of the matrix coefficients are almost zero; excluding these zero coefficients reduces the digital stream further. The discrete cosine transform is the most widely used such orthogonal transformation. In this paper, the errors of television measuring systems and of data compression protocols are analyzed: the main characteristics of measuring systems and the sources of their error are identified, the most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is researched. The obtained results will increase the accuracy of measuring systems. In a television measuring system, image quality is reduced both by distortions identical to those in analog systems and by distortions specific to the encoding/decoding of the digital video signal and to errors in the transmission channel. The distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness drops, color blur, false patterns, and the "dirty window" effect. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends randomly on the pre- and post-history of the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.
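
    A worked miniature of the transform-coding argument above: a 2D DCT turns a smooth 8x8 block into mostly near-zero coefficients, which can be dropped with little reconstruction error (a generic illustration, not the paper's experiment):

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):  return dct(dct(b.T, norm="ortho").T, norm="ortho")
    def idct2(b): return idct(idct(b.T, norm="ortho").T, norm="ortho")

    # A smooth 8x8 luminance block (horizontal ramp), typical of natural images.
    block = np.tile(np.linspace(100, 140, 8), (8, 1))

    coeffs = dct2(block)
    kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)   # drop tiny coefficients
    print(f"nonzero coefficients kept: {np.count_nonzero(kept)} of 64")
    print(f"max reconstruction error : {np.abs(idct2(kept) - block).max():.3f}")
    ```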

  9. Data streaming in telepresence environments.

    PubMed

    Lamboray, Edouard; Würmlin, Stephan; Gross, Markus

    2005-01-01

    In this paper, we discuss data transmission in telepresence environments for collaborative virtual reality applications. We analyze data streams in the context of networked virtual environments and classify them according to their traffic characteristics. Special emphasis is put on geometry-enhanced (3D) video. We review architectures for real-time 3D video pipelines and derive theoretical bounds on the minimal system latency as a function of the transmission and processing delays. Furthermore, we discuss bandwidth issues of differential update coding for 3D video. In our telepresence system--the blue-c--we use a point-based 3D video technology which allows for differentially encoded 3D representations of human users. While we discuss the considerations which led to the design of our three-stage 3D video pipeline, we also elucidate some critical implementation details regarding the decoupling of acquisition, processing and rendering frame rates, and audio/video synchronization. Finally, we demonstrate the communication and networking features of the blue-c system in its full deployment. We show how the system can be controlled to cope with processing or networking bottlenecks by adapting its multiple components, such as audio, application data, and 3D video.

  10. Stream On: Video Servers in the Real World.

    ERIC Educational Resources Information Center

    Tristram, Claire

    1995-01-01

    Despite plans for corporate training networks, digital ad-insertion systems, hotel video-on-demand, and interactive television, only small scale video networks presently work. Four case studies examine the design and implementation decisions for different markets: corporate; advertising; hotel; and commercial video via cable, satellite or…

  11. Joint Doctrine for Unmanned Aircraft Systems: The Air Force and the Army Hold the Key to Success

    DTIC Science & Technology

    2010-05-03

    concept, coupled with sensor technologies that provide multiple video streams to multiple ground units, delivers increased capability and capacity to...airborne surveillance” allow one UAS to collect up to ten video transmissions, sending them to ten different users on the ground. Future iterations...of this technology, dubbed Gorgon Stare, will increase to as many as 65 video streams per UAS by 2014. 31 Being able to send multiple views of an

  12. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest-quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced with all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telementor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.

  13. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    PubMed

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ selective encryption, encrypting parts of important and sensitive video information so as to ensure real-time performance and encryption efficiency. Classic block ciphers are not applicable to video encryption due to their high computational overhead. In this paper, we propose an encryption selection control module that dynamically encrypts video syntax elements under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
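
    The paper's spatiotemporal chaos system is not reproduced here; as a simplified stand-in, a logistic-map keystream XORed over bytes standing in for the selected syntax elements shows the chaotic stream-cipher shape. The parameters are illustrative and this sketch is not cryptographically vetted:

    ```python
    def logistic_keystream(x0, n, r=3.99):
        """One byte per iteration from the logistic map x <- r*x*(1-x)."""
        x, out = x0, bytearray()
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) & 0xFF)   # crude binarization step
        return bytes(out)

    def xor_encrypt(data, key_x0):
        ks = logistic_keystream(key_x0, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))

    syntax_elements = b"\x12\x34\x56\x78"       # stand-in for chosen H.264 fields
    ct = xor_encrypt(syntax_elements, key_x0=0.3141592)
    assert xor_encrypt(ct, key_x0=0.3141592) == syntax_elements  # symmetric
    print(ct.hex())
    ```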

  14. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transport of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change in the user's viewing direction to the change in the displayed image is small and does not depend on the actual distance between the two sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  15. A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems

    PubMed Central

    Chung, Young-uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon

    2014-01-01

    Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377
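
    A greedy sketch of the priority idea above: allocate OFDMA resource blocks to every user's 2D video first and give depth maps only what remains, so depth quality degrades first under shortage. The numbers are invented, and the paper's actual scheme is an optimization solved heuristically, not this two-pass rule:

    ```python
    def allocate(users, total_rb):
        """users: {name: (rb_needed_2d, rb_needed_depth)} -> {name: [rb_2d, rb_depth]}."""
        alloc, left = {}, total_rb
        for name, (rb_2d, _) in users.items():          # pass 1: 2D video first
            take = min(rb_2d, left)
            alloc[name] = [take, 0]
            left -= take
        for name, (_, rb_depth) in users.items():       # pass 2: depth maps
            take = min(rb_depth, left)
            alloc[name][1] = take
            left -= take
        return alloc

    users = {"u1": (20, 10), "u2": (25, 12), "u3": (15, 8)}
    print(allocate(users, total_rb=70))   # depth maps absorb the shortage
    ```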

  16. Evaluating the Use of Streaming Video To Support Student Learning in a First-Year Life Sciences Course for Student Nurses.

    ERIC Educational Resources Information Center

    Green, Sue M.; Voegeli, David; Harrison, Maureen; Phillips, Jackie; Knowles, Jess; Weaver, Mike; Shepard, Kerry

    2003-01-01

    Nursing students (n=656) used streaming videos on immune, endocrine, and neurological systems using Blackboard software. Of students who viewed all three, 32% found access easy, 59% enjoyed them, and 25% felt very confident in their learning. Results were consistent across three different types and embedding methods. Technical and access problems…

  17. High-resolution streaming video integrated with UGS systems

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew

    2010-04-01

    Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower resolution images in small quantities. Currently, a high resolution, wireless imaging system is being developed to bring megapixel, streaming video to remote locations to operate in concert with UGS. This paper will provide an overview of how using Wifi radios, new image based Digital Signal Processors (DSP) running advanced target detection algorithms, and high resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.

  18. Industrial-Strength Streaming Video.

    ERIC Educational Resources Information Center

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyrights questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  19. Integrating Time-Synchronized Video with Other Geospatial and Temporal Data for Remote Science Operations

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace

    2018-01-01

    Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.

  20. Design and develop a video conferencing framework for real-time telemedicine applications using secure group-based communication architecture.

    PubMed

    Mat Kiah, M L; Al-Bakri, S H; Zaidan, A A; Zaidan, B B; Hussain, Muzammil

    2014-10-01

    One of the applications of modern technology in telemedicine is video conferencing. An alternative to traveling to attend a conference or meeting, video conferencing is becoming increasingly popular among hospitals. By using this technology, doctors can help patients who are unable to physically visit hospitals. Video conferencing particularly benefits patients from rural areas, where good doctors are not always available. Telemedicine has proven to be a blessing to patients who have no access to the best treatment. A telemedicine system consists of customized hardware and software at two locations, namely, at the patient's and the doctor's end. In such cases, the video streams of the conferencing parties may contain highly sensitive information. Thus, real-time data security is one of the most important requirements when designing video conferencing systems. This study proposes a secure framework for video conferencing systems and a complete management solution for secure video conferencing groups. Java Media Framework Application Programming Interface classes are used to design and test the proposed secure framework. Real-time Transport Protocol over User Datagram Protocol is used to transmit the encrypted audio and video streams, and RSA and AES algorithms are used to provide the required security services. Results show that the encryption algorithm insignificantly increases the video conferencing computation time.
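
    A minimal sketch of the "encrypt media before UDP transport" step using AES-GCM from the cryptography package; the RTP framing, the RSA key exchange and the group management described in the paper are omitted, and the address is a placeholder:

    ```python
    import os
    import socket
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # in the paper, exchanged via RSA
    aead = AESGCM(key)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_media(payload: bytes, addr=("127.0.0.1", 5004)):
        """Encrypt one media packet and send it; the nonce travels with the packet."""
        nonce = os.urandom(12)
        sock.sendto(nonce + aead.encrypt(nonce, payload, None), addr)

    send_media(b"fake video frame bytes")
    ```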

  1. Utilizing Current Commercial-off-the-Shelf Facial Recognition and Public Live Video Streaming to Enhance National Security

    DTIC Science & Technology

    2014-09-01

    biometrics technologies. 14. SUBJECT TERMS Facial recognition, systems engineering, live video streaming, security cameras, national security ...national security by sharing biometric facial recognition data in real-time utilizing infrastructures currently in place. It should be noted that the...9/11), law enforcement (LE) and Intelligence community (IC) authorities responsible for protecting citizens from threats against national security

  2. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    ERIC Educational Resources Information Center

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  3. A system for endobronchial video analysis

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprising a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case studies.
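
    A sketch of the first stage described above, parsing a video into shots via frame-to-frame histogram distance (the threshold and filename are illustrative; the paper's actual parsing method is not detailed here):

    ```python
    import cv2

    def shot_boundaries(path, threshold=0.5):
        """Frame indices where the grayscale histogram changes sharply."""
        cap, prev, cuts, i = cv2.VideoCapture(path), None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            cv2.normalize(hist, hist)
            if prev is not None:
                d = cv2.compareHist(prev, hist, cv2.HISTCMP_BHATTACHARYYA)
                if d > threshold:       # large distance -> likely shot cut
                    cuts.append(i)
            prev, i = hist, i + 1
        cap.release()
        return cuts

    print(shot_boundaries("bronchoscopy.mp4"))  # placeholder filename
    ```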

  4. Performance analysis of medical video streaming over mobile WiMAX.

    PubMed

    Alinejad, Ali; Philip, N; Istepanian, R H

    2010-01-01

    Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present a performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described, together with the performance of the relevant medical quality of service (m-QoS) metrics.

  5. The Effectiveness of Streaming Video on Medical Student Learning: A Case Study

    PubMed Central

    Bridge, Patrick D.; Jackson, Matt; Robinson, Leah

    2009-01-01

    Information technology helps meet today's medical students’ needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1–2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology. PMID:20165525

  6. The effectiveness of streaming video on medical student learning: a case study.

    PubMed

    Bridge, Patrick D; Jackson, Matt; Robinson, Leah

    2009-08-19

    Information technology helps meet today's medical students' needs by providing multiple curriculum delivery methods. Video streaming is an e-learning technology that uses the Internet to deliver curriculum while giving the student control of the content's delivery. There have been few studies conducted on the effectiveness of streaming video in medical schools. A 5-year retrospective study was conducted using three groups of students (n = 1736) to determine if the availability of streaming video in Years 1-2 of the basic science curriculum affected overall Step 1 scores for first-time test-takers. The results demonstrated a positive effect on program outcomes as streaming video became more readily available to students. Based on these findings, streaming video technology seems to be a viable tool to complement in-class delivery methods, to accommodate the needs of medical students, and to provide options for meeting the challenges of delivering the undergraduate medical curriculum. Further studies need to be conducted to continue validating the effectiveness of streaming video technology.

  7. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    NASA Astrophysics Data System (ADS)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs generates a vast amount of video that must be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework, we execute a use case in which surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  8. Streaming Video--The Wave of the Video Future!

    ERIC Educational Resources Information Center

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video and voice data that is received…

  9. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
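
    The buffer-fullness idea highlighted above can be reduced to a small control rule: run the processor only as fast as needed to keep the decode buffer from draining. The sketch below is a toy illustration under assumed operating points and thresholds, not the paper's measured policy.

    ```python
    # Toy DVFS controller driven by stream-buffer occupancy.
    FREQ_LEVELS_MHZ = [200, 400, 600, 800]   # hypothetical operating points

    def choose_frequency(buffer_fill: float) -> int:
        """buffer_fill in [0, 1]: fraction of the stream buffer occupied."""
        if buffer_fill > 0.75:
            return FREQ_LEVELS_MHZ[0]   # plenty buffered: slow down, save power
        if buffer_fill > 0.50:
            return FREQ_LEVELS_MHZ[1]
        if buffer_fill > 0.25:
            return FREQ_LEVELS_MHZ[2]
        return FREQ_LEVELS_MHZ[3]       # near underrun: run at full speed

    for fill in (0.9, 0.6, 0.3, 0.1):
        print(fill, "->", choose_frequency(fill), "MHz")
    ```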

  10. [Assessment of learning activities using streaming video for laboratory practice education: aiming for development of E-learning system that promotes self-learning].

    PubMed

    Takeda, Naohito; Takeuchi, Isao; Haruna, Mitsumasa

    2007-12-01

    In order to develop an e-learning system that promotes self-learning, lectures and basic operations in the laboratory practice of chemistry were recorded and edited on DVD media, consisting of 8 streaming videos as learning materials. Twenty-six students who wanted to watch the DVD answered the following question after they had watched it: "Do you think the video would serve to encourage you to study independently in the laboratory practice?" Almost all students (95%) approved of its usefulness, and more than 60% of them watched the videos repeatedly in order to acquire deeper knowledge and skill in the experimental operations. More than 60% answered that the demonstration-experiment should be continued in the laboratory practice, in spite of the distribution of the DVD media.

  11. A systematic review of usability test metrics for mobile video streaming apps

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.

    2016-08-01

    This paper presents the results of a systematic review of the usability test metrics for mobile video streaming apps. In the study, 238 studies were found, but only 51 relevant papers were eventually selected for the review. The study reveals that the time taken for video streaming and the video quality were the two most popular metrics used in usability tests for mobile video streaming apps. In addition, most of the studies concentrated on the usability of mobile TV, as users are switching from traditional TV to mobile TV.

  12. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study, a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.

  13. We All Stream for Video

    ERIC Educational Resources Information Center

    Technology & Learning, 2008

    2008-01-01

    More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…

  14. Real-Time Transmission and Storage of Video, Audio, and Health Data in Emergency and Home Care Situations

    NASA Astrophysics Data System (ADS)

    Barbieri, Ivano; Lambruschini, Paolo; Raggio, Marco; Stagnaro, Riccardo

    2007-12-01

    The increase in the availability of bandwidth for wireless links, network integration, and the computational power on fixed and mobile platforms at affordable costs nowadays allows for the handling of audio and video data, with quality that makes them suitable for medical applications. These information streams can support both continuous monitoring and emergency situations. According to this scenario, the authors have developed and implemented the mobile communication system which is described in this paper. The system is based on the ITU-T H.323 multimedia terminal recommendation, suitable for real-time data/video/audio and telemedical applications. The video and audio codecs, respectively H.264 and G.723.1, were implemented and optimized in order to obtain high performance on the system target processors. Offline media streaming storage and retrieval functionalities were supported by integrating a relational database into the hospital central system. The system is based on low-cost consumer technologies such as general packet radio service (GPRS) and wireless local area network (WLAN or WiFi) for low-band data/video transmission. Implementation and testing were carried out for medical emergency and telemedicine applications. In this paper, the emergency case study is described.

  15. Optimisation Issues of High Throughput Medical Data and Video Streaming Traffic in 3G Wireless Environments.

    PubMed

    Istepanian, R S H; Philip, N

    2005-01-01

    In this paper we describe some of the optimisation issues relevant to the requirements of high throughput of medical data and video streaming traffic in 3G wireless environments. In particular, we present a challenging 3G mobile health care application that requires a demanding 3G medical data throughput. We also describe the 3G QoS requirements of the mObile Tele-Echography ultra-Light rObot system (OTELO), which is designed to provide seamless 3G connectivity for real-time ultrasound medical video streams and diagnosis from a remote site (robotic and patient station) manipulated by an expert side (specialists) that controls the robotic scanning operation and presents real-time feedback diagnosis using 3G wireless communication links.

  16. Task-oriented situation recognition

    NASA Astrophysics Data System (ADS)

    Bauer, Alexander; Fischer, Yvonne

    2010-04-01

    From the advances in computer vision methods for the detection, tracking and recognition of objects in video streams, new opportunities for video surveillance arise: in the future, automated video surveillance systems will be able to detect critical situations early enough to enable an operator to take preventive actions, instead of using video material merely for forensic investigations. However, problems such as limited computational resources, privacy regulations and a constant change in potential threats have to be addressed by a practical automated video surveillance system. In this paper, we show how these problems can be addressed using a task-oriented approach. The system architecture of the task-oriented video surveillance system NEST and an algorithm for the detection of abnormal behavior as part of the system are presented and illustrated for the surveillance of guests inside a video-monitored building.

  17. The Use of Smart Glasses for Surgical Video Streaming.

    PubMed

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  18. Effects of Video Streaming Technology on Public Speaking Students' Communication Apprehension and Competence

    ERIC Educational Resources Information Center

    Dupagne, Michel; Stacks, Don W.; Giroux, Valerie Manno

    2007-01-01

    This study examines whether video streaming can reduce trait and state communication apprehension, as well as improve communication competence, in public speaking classes. Video streaming technology has been touted as the next generation of video feedback for public speaking students because it is not limited by time or space and allows Internet…

  19. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks

    PubMed Central

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113

  20. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    PubMed

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
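
    The member device's stop/request behavior described in the two records above amounts to a watermark-based control loop. The sketch below illustrates that loop under assumed watermark values; it is not the authors' app code.

    ```python
    # Watermark-based download control for a relay member device.
    HIGH_WATERMARK = 30.0   # seconds of buffered video (assumed)
    LOW_WATERMARK = 10.0

    def control_step(buffered_seconds: float, downloading: bool) -> bool:
        """Return whether the device should keep downloading after this step."""
        if downloading and buffered_seconds >= HIGH_WATERMARK:
            return False        # sufficiently buffered: stop the download
        if not downloading and buffered_seconds <= LOW_WATERMARK:
            return True         # running low: request additional video data
        return downloading      # otherwise keep the current state

    state = True
    for buf in (5, 15, 31, 28, 9):
        state = control_step(buf, state)
        print(f"buffer={buf:5.1f}s downloading={state}")
    ```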

  1. Cooperation stimulation strategies for peer-to-peer wireless live video-sharing social networks.

    PubMed

    Lin, W Sabrina; Zhao, H Vicky; Liu, K J Ray

    2010-07-01

    Human behavior analysis in video sharing social networks is an emerging research area, which analyzes the behavior of users who share multimedia content and investigates the impact of human dynamics on video sharing systems. Users watching live streaming in the same wireless network share the same limited bandwidth of the backbone connection to the Internet; thus, they might want to cooperate with each other to obtain better video quality. These users form a wireless live-streaming social network. Every user wishes to watch video with high quality while paying as little cost as possible to help others. This paper focuses on providing incentives for user cooperation. We propose a game-theoretic framework to model user behavior and to analyze the optimal strategies for user cooperation stimulation in wireless live streaming. We first analyze the Pareto optimality and the time-sensitive bargaining equilibrium of the two-person game. We then extend the solution to the multiuser scenario. We also consider potential selfish users' cheating behavior and malicious users' attacking behavior, and analyze the performance of the proposed strategies in the presence of cheating users and malicious attackers. Both our analytical and simulation results show that the proposed strategies can effectively stimulate user cooperation, achieve cheat-free and attack-resistant operation, and help provide reliable services for wireless live streaming applications.

  2. Real-time WebRTC-based design for a telepresence wheelchair.

    PubMed

    Van Kha Ly Ha; Rifai Chai; Nguyen, Hung T

    2017-07-01

    This paper presents a novel approach to a telepresence wheelchair system capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient way to support assisted living for people with disabilities. The proposed system has been designed and developed by deploying JavaScript with Hypertext Markup Language 5 (HTML5) and Web Real-Time Communication (WebRTC), in which an adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, in which the wheelchair was controlled from a distance using an Internet browser, to compare with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high quality, with approximately 30 frames per second (fps) and a round-trip time of less than 20 milliseconds (ms). These performance results confirm that the WebRTC approach is a promising method for developing a telepresence wheelchair system.
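
    The adaptive rate control invoked above matches the encoded video rate to the available bandwidth. As a hedged sketch of that general idea (WebRTC's actual congestion control is more elaborate, and the paper's exact algorithm is not reproduced here), an AIMD-style controller might look like the following; all constants are assumptions.

    ```python
    # Illustrative AIMD-style encoder rate controller.
    def adapt_bitrate(current_kbps, estimated_bw_kbps, loss_rate,
                      min_kbps=100, max_kbps=2500):
        if loss_rate > 0.02:                    # congestion: back off sharply
            target = current_kbps * 0.85
        elif current_kbps < 0.9 * estimated_bw_kbps:
            target = current_kbps + 50          # headroom: probe upward gently
        else:
            target = current_kbps               # already matching the link
        return max(min_kbps, min(max_kbps, target))

    rate = 500
    for bw, loss in [(1200, 0.0), (1200, 0.0), (800, 0.05), (800, 0.0)]:
        rate = adapt_bitrate(rate, bw, loss)
        print(f"bw={bw} loss={loss:.2f} -> encoder target {rate:.0f} kbps")
    ```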

  3. Network Characteristics of Video Streaming Traffic

    DTIC Science & Technology

    2011-11-01

    Silverlight, Flash, or HTML5) used for video streaming. In particular, we identify three different streaming strategies that produce traffic... HTML5, Flash. 1. INTRODUCTION The popularity of video streaming has considerably increased in the last decade. Indeed, recent studies have shown... applications for mobile devices), and the container (Flash [10], HTML5 [18], Silverlight [4]), on the characteristics of the traffic between the

  4. Two-Stream Transformer Networks for Video-based Face Alignment.

    PubMed

    Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on benchmarking video-based face alignment datasets show very competitive performance of our method in comparison to the state of the art.
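
    The decomposition described above (a spatial stream over still frames and a temporal stream over cross-frame motion, fused for landmark regression) can be sketched structurally in PyTorch. The block below is only an architectural illustration under invented layer sizes; it is not the paper's TSTN, whose temporal stream uses active appearance codes rather than raw frame differences.

    ```python
    # Structural two-stream sketch: spatial appearance + temporal motion.
    import torch
    import torch.nn as nn

    def small_cnn(in_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    class TwoStreamAligner(nn.Module):
        def __init__(self, n_landmarks=68):
            super().__init__()
            self.spatial = small_cnn(3)    # appearance of the current frame
            self.temporal = small_cnn(3)   # motion: difference of frames
            self.head = nn.Linear(64, 2 * n_landmarks)

        def forward(self, frame, prev_frame):
            motion = frame - prev_frame
            feats = torch.cat([self.spatial(frame),
                               self.temporal(motion)], dim=1)
            return self.head(feats).view(feats.size(0), -1, 2)

    model = TwoStreamAligner()
    f1 = torch.rand(2, 3, 128, 128)
    f0 = torch.rand(2, 3, 128, 128)
    print(model(f1, f0).shape)    # torch.Size([2, 68, 2])
    ```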

  5. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy; an action-intensive movie, Aliens; a luminance-intensive (black and white) movie, Road To Utopia; and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic, which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly correlated Gaussian process is used to model the individual frame sizes. Scene change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency, we propose a traffic shaping scheme that sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6-Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. A patent for this scheme is pending.
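
    A much-simplified generator in the spirit of the model above is easy to write down: a fixed I/B/P pattern within the GOP (standing in for the 15-stage Markov chain), Gaussian per-type frame sizes, and gamma-distributed scene-change arrivals that re-draw the activity level. All numeric parameters below are invented for illustration and are not the fitted values from the paper.

    ```python
    # Simplified MPEG-II GOP traffic generator (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    GOP = "IBBPBBPBBPBBPBB"                      # one state per GOP position
    MEAN_KB = {"I": 60.0, "P": 25.0, "B": 12.0}  # assumed mean frame sizes
    STD_KB = {"I": 8.0, "P": 5.0, "B": 3.0}

    def frame_sizes(n_frames, scene_shape=2.0, scene_scale=120.0):
        sizes, scale = [], 1.0
        next_scene = rng.gamma(scene_shape, scene_scale)
        for i in range(n_frames):
            if i >= next_scene:                  # scene change: new activity
                scale = rng.uniform(0.6, 1.6)
                next_scene += rng.gamma(scene_shape, scene_scale)
            t = GOP[i % len(GOP)]
            kb = max(1.0, rng.normal(MEAN_KB[t] * scale, STD_KB[t]))
            sizes.append((t, kb))
        return sizes

    for t, kb in frame_sizes(8):
        print(t, f"{kb:.1f} kB")
    ```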

  6. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    NASA Astrophysics Data System (ADS)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, the results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of the various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.

  7. Architecture of portable electronic medical records system integrated with streaming media.

    PubMed

    Chen, Wei; Shih, Chien-Chou

    2012-02-01

    Due to the increasing occurrence of accidents and illness during business trips, travel, or overseas studies, the requirement for portable EMR (Electronic Medical Records) has increased. This study proposes integrating streaming media technology into the EMR system to facilitate referrals, contracted laboratories, and disease notification among hospitals. The current study encoded static and dynamic medical images of patients into a streaming video format and stored them in a Flash Media Server (FMS). Based on the Taiwan Electronic Medical Record Template (TMT) standard, EMR records can be converted into XML documents and used to integrate description fields with embedded streaming videos. This investigation implemented a web-based portable EMR interchange system using streaming media techniques to expedite the exchange of medical image information among hospitals. The proposed architecture of the portable EMR retrieval system not only provides local hospital users the ability to acquire EMR text files from a previous hospital, but also helps them access static and dynamic medical images as references for clinical diagnosis and treatment. The proposed method protects the property rights of medical images through the information security mechanisms of the Medical Record Interchange Service Center and Health Certificate Authorization to facilitate proper, efficient, and continuous treatment of patients.

  8. Android Video Streaming

    DTIC Science & Technology

    2014-05-01

    natural choice. In this document, we describe several aspects of video streaming and the challenges of performing video streaming between Android-based... client application was needed. Typically something like VideoLAN Client (VLC) is used for this purpose in a desktop environment. However, while VLC is... a very mature application on Windows and Linux, VLC for Android is still in a beta testing phase, and versions have only been developed to work

  9. Exploring inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video

    NASA Astrophysics Data System (ADS)

    Li, Jia; Tian, Yonghong; Gao, Wen

    2008-01-01

    In recent years, the amount of streaming video on the Web has grown rapidly. Retrieving these streaming videos often poses the challenge of indexing and analyzing the media in real time, because the streams must be treated as effectively infinite in length, thus precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection for streaming video, and few of them address the differentiation of captions from scene texts and scrolling texts. In general, these texts play different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach that explores inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, the inter-frame correlation information is used to distinguish caption texts from scene texts and scrolling texts. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame and keep only caption regions for further processing. Experimental results show that our approach is able to offer real-time caption detection with high recall and a low false alarm rate, and can effectively discern caption texts from other texts even at low resolutions.

  10. Online Class Review: Using Streaming-Media Technology

    ERIC Educational Resources Information Center

    Loudon, Marc; Sharp, Mark

    2006-01-01

    We present an automated system that allows students to replay both audio and video from a large nonmajors' organic chemistry class as streaming RealMedia. Once established, this system requires no technical intervention and is virtually transparent to the instructor. This gives students access to online class review at any time. Assessment has…

  11. Efficient data replication for the delivery of high-quality video content over P2P VoD advertising networks

    NASA Astrophysics Data System (ADS)

    Ho, Chien-Peng; Yu, Jen-Yu; Lee, Suh-Yin

    2011-12-01

    Recent advances in modern television systems have had profound consequences for the scalability, stability, and quality of transmitted digital data signals. This is of particular significance for peer-to-peer (P2P) video-on-demand (VoD) platforms, which are faced with an immediate and growing demand for reliable service delivery. In response to demands for high-quality video, the key objectives in the construction of the proposed framework were user satisfaction with perceived video quality and the effective utilization of available resources on P2P VoD networks. This study developed a peer-based promoter to support online advertising in P2P VoD networks based on an estimation of video distortion prior to the replication of data stream chunks. The proposed technology enables the recovery of lost video using replicated stream chunks in real time. Load balance is achieved by adjusting the replication level of each candidate group according to the degree of distortion, thereby enabling a significant reduction in server load and increased scalability in the P2P VoD system. This approach also promotes the use of advertising as an efficient tool for commercial promotion. Results indicate that the proposed system efficiently satisfies the given fault tolerances.

  12. Identifying hidden voice and video streams

    NASA Astrophysics Data System (ADS)

    Fan, Jieyan; Wu, Dapeng; Nucci, Antonio; Keralapura, Ram; Gao, Lixin

    2009-04-01

    Given the rising popularity of voice and video services over the Internet, accurately identifying voice and video traffic that traverse their networks has become a critical task for Internet service providers (ISPs). As the number of proprietary applications that deliver voice and video services to end users increases over time, the search for the one methodology that can accurately detect such services while being application independent still remains open. This problem becomes even more complicated when voice and video service providers like Skype, Microsoft, and Google bundle their voice and video services with other services like file transfer and chat. For example, a bundled Skype session can contain both a voice stream and a file transfer stream in the same layer-3/layer-4 flow. In this context, traditional techniques to identify voice and video streams do not work. In this paper, we propose a novel self-learning classifier, called VVS-I, which detects the presence of voice and video streams in flows with minimum manual intervention. Our classifier works in two phases: a training phase and a detection phase. In the training phase, VVS-I first extracts the relevant features, and subsequently constructs a fingerprint of a flow using power spectral density (PSD) analysis. In the detection phase, it compares the fingerprint of a flow to the existing fingerprints learned during the training phase, and subsequently classifies the flow. Our classifier is not only capable of detecting voice and video streams that are hidden in different flows, but is also capable of detecting different applications (like Skype, MSN, etc.) that generate these voice/video streams. We show that our classifier can achieve close to 100% detection rate while keeping the false positive rate to less than 1%.
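
    The fingerprinting step described above can be sketched with standard tools: estimate a power spectral density from a flow's byte-count time series and compare it to stored fingerprints. The block below is a hedged toy version of that idea; the sampling rate, Welch parameters, and similarity threshold are assumptions, and VVS-I's actual feature extraction is richer.

    ```python
    # Toy PSD fingerprinting of a traffic flow.
    import numpy as np
    from scipy.signal import welch

    FS = 100.0  # flow binned into bytes-per-10-ms samples (assumed)

    def psd_fingerprint(byte_counts):
        _, pxx = welch(byte_counts, fs=FS, nperseg=256)
        return pxx / np.linalg.norm(pxx)       # unit norm for comparison

    def classify(flow, fingerprints, threshold=0.9):
        fp = psd_fingerprint(flow)
        best, score = None, threshold
        for label, ref in fingerprints.items():
            s = float(np.dot(fp, ref))         # cosine similarity
            if s > score:
                best, score = label, s
        return best                            # None: no voice/video found

    # Usage: a periodic voice-like flow versus the stored fingerprint.
    t = np.arange(2048) / FS
    voice_like = 1000 + 800 * (np.sin(2 * np.pi * 20 * t) > 0)  # 20 Hz on/off
    refs = {"voice": psd_fingerprint(voice_like)}
    noisy = voice_like + np.random.default_rng(1).normal(0, 30, 2048)
    print(classify(noisy, refs))
    ```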

  13. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system can achieve high-quality video conversion with a minimal board size.

  14. The quality of video information on burn first aid available on YouTube.

    PubMed

    Butler, Daniel P; Perry, Fiona; Shah, Zameer; Leon-Villapalos, Jorge

    2013-08-01

    To evaluate the clinical accuracy and delivery of information on thermal burn first aid available on the leading video-streaming website, YouTube. YouTube was searched using four separate search terms. The first 20 videos identified for each search term were included in the study if their primary focus was on thermal burn first aid. Videos were scored by two independent reviewers using a standardised scoring system, and the scores were totalled to give each video an overall score out of 20. A total of 47 videos were analysed. The average video score was 8.5 out of a possible 20. No video scored full marks. A low correlation was found between the score given by the independent reviewers and the number of views the video received per month (Spearman's rank correlation coefficient = 0.03, p = 0.86). The current standard of videos covering thermal burn first aid available on YouTube is unsatisfactory. In addition, viewers do not appear to be drawn to videos of higher quality. Organisations involved in managing burns and providing first aid care should be encouraged to produce clear, structured videos that can be made available on leading video-streaming websites. Copyright © 2012 Elsevier Ltd and ISBI. All rights reserved.

  15. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    PubMed

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieving efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer-duration videos with time-varying distortions was built, and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate-adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
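
    A Hammerstein-Wiener model is a static input nonlinearity followed by a linear dynamic block and a static output nonlinearity; the linear block is what lets the prediction lag behind quality changes, mimicking hysteresis. The sketch below shows that generic structure with invented functions and coefficients; it is not the paper's fitted model.

    ```python
    # Generic Hammerstein-Wiener structure: static NL -> LTI filter -> static NL.
    import numpy as np
    from scipy.signal import lfilter

    def input_nl(q):                 # static nonlinearity on per-frame quality
        return np.tanh((q - 50.0) / 25.0)

    def output_nl(z):                # map filtered signal to a 0-100 scale
        return 50.0 + 50.0 * np.tanh(z)

    def predict_tvsq(frame_quality):
        u = input_nl(np.asarray(frame_quality, dtype=float))
        # First-order IIR block models hysteresis: output lags quality changes.
        z = lfilter([0.05], [1.0, -0.95], u)
        return output_nl(z)

    # Usage: quality drops mid-stream; predicted TVSQ reacts with a lag.
    q = np.concatenate([np.full(50, 80.0), np.full(50, 30.0)])
    print(predict_tvsq(q)[45:55].round(1))
    ```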

  16. A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet

    ERIC Educational Resources Information Center

    Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.

    2006-01-01

    Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…

  17. Understanding the Perceptions of Network Gatekeepers on Bandwidth and Online Video Streams in Ahmadu Bello University, Nigeria

    ERIC Educational Resources Information Center

    Odigie, Imoisili Ojeime; Gbaje, Ezra Shiloba

    2017-01-01

    Online video streaming is a learning technology used in today's world and reliant on the availability of bandwidth. This research study sought to understand the perceptions of network gatekeepers about bandwidth and online video streams in Ahmadu Bello University, Nigeria. To achieve this, the interpretive paradigm and the Network Gatekeeping…

  18. Video streaming in nursing education: bringing life to online education.

    PubMed

    Smith-Stoner, Marilyn; Willer, Ann

    2003-01-01

    Distance education is a standard form of instruction for many colleges of nursing. Web-based course and program content has been delivered primarily through text-based presentations such as PowerPoint slides and Web search activities. However, the rapid pace of technological innovation is making available more sophisticated forms of delivery, such as video streaming. High-quality video streams that build on PowerPoint or create new media for use on the Web can be produced at the instructor's desktop or in basic recording studios. The technology required to design, produce, and upload short video-streamed course content objects to the Internet is described. The preparation of materials, suggested production guidelines, and examples of information presented via desktop video methods are also presented.

  19. Construction of a multimodal CT-video chest model

    NASA Astrophysics Data System (ADS)

    Byrnes, Patrick D.; Higgins, William E.

    2014-03-01

    Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope's continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient's 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient's 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video's color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.

  20. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
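
    The decision problem formulated above assigns one MCS per SVC layer so that the total airtime fits the frame while user utility is maximized. The paper solves it as an ILP; the toy below uses exhaustive search only because the instance is tiny, and every number in it is invented. A user is counted for layer l only if it can also decode all lower layers, hence the running minimum.

    ```python
    # Toy MCS-per-layer assignment by exhaustive search (ILP stand-in).
    from itertools import product

    MCS_RATE = {0: 1.0, 1: 2.0, 2: 4.0}    # Mbps per unit airtime (assumed)
    USERS_DECODING = {0: 10, 1: 7, 2: 3}   # users whose channel supports MCS
    LAYER_RATE = [1.0, 1.5, 2.0]           # Mbps required per SVC layer
    AIRTIME_BUDGET = 2.2                   # time units available per frame

    def best_assignment():
        best, best_util = None, -1.0
        for mcs in product(sorted(MCS_RATE), repeat=len(LAYER_RATE)):
            airtime = sum(r / MCS_RATE[m] for r, m in zip(LAYER_RATE, mcs))
            if airtime > AIRTIME_BUDGET:
                continue                   # assignment does not fit the frame
            # A user enjoys layer l only if it also received layers 0..l-1.
            util = sum(min(USERS_DECODING[m] for m in mcs[: l + 1])
                       for l in range(len(mcs)))
            if util > best_util:
                best, best_util = mcs, util
        return best, best_util

    print(best_assignment())   # ((1, 1, 2), 17) with these toy numbers
    ```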

  1. Webcam Stories

    ERIC Educational Resources Information Center

    Clidas, Jeanne

    2011-01-01

    Stories, steeped in science content and full of specific information, can be brought into schools and homes through the power of live video streaming. Video streaming refers to the process of viewing video over the internet. These videos may be live (webcam feeds) or recorded. These stories are engaging and inspiring. They offer opportunities to…

  2. Performance Evaluation of the NASA/KSC Transmission System

    NASA Technical Reports Server (NTRS)

    Christensen, Kenneth J.

    2000-01-01

    NASA-KSC currently uses three bridged 100-Mbps FDDI segments as its backbone for data traffic. The FDDI Transmission System (FTXS) connects the KSC industrial area, the KSC launch complex 39 area, and the Cape Canaveral Air Force Station. The report presents a performance modeling study of the FTXS and the proposed ATM Transmission System (ATXS). The focus of the study is the performance of MPEG video transmission on these networks. Commercial modeling tools - the CACI Predictor and Comnet tools - were used. In addition, custom software tools were developed to characterize conversation pairs in Sniffer trace (capture) files for use as input to these tools. A baseline study of both non-launch and launch day data traffic on the FTXS is presented. MPEG-1 and MPEG-2 video traffic was characterized and the shaping of it evaluated. It is shown that the characteristics of a video stream have a direct effect on its performance in a network. It is also shown that shaping of video streams is necessary to prevent overflow losses and the resulting poor video quality. The developed models can be used to predict when the existing FTXS will 'run out of room' and to optimize the parameters of ATM links used for transmission of MPEG video. Future work with these models can provide useful input and validation to set-top box projects within the Advanced Networks Development group in NASA-KSC Development Engineering.
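
    The shaping result mentioned above (smoothing streams to avoid overflow losses) is commonly realized with a token bucket: a frame is admitted only when enough tokens have accumulated at the sustained rate. The sketch below is a generic illustration of that technique; the study's actual shaping parameters are not given here, so the rate and bucket depth are assumptions.

    ```python
    # Generic token-bucket shaper for bursty video frames.
    class TokenBucket:
        def __init__(self, rate_bps: float, bucket_bits: float):
            self.rate = rate_bps          # sustained drain rate
            self.capacity = bucket_bits   # burst tolerance
            self.tokens = bucket_bits
            self.last = 0.0

        def admit(self, now: float, frame_bits: float) -> bool:
            """Refill tokens, then admit the frame only if tokens cover it."""
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if frame_bits <= self.tokens:
                self.tokens -= frame_bits
                return True
            return False                  # frame must wait (or be dropped)

    shaper = TokenBucket(rate_bps=6e6, bucket_bits=2e6)
    for t, bits in [(0.00, 1.5e6), (0.04, 1.5e6), (0.08, 0.3e6)]:
        print(t, shaper.admit(t, bits))   # True, False, True
    ```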

  3. Video streaming technologies using ActiveX and LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2015-06-01

    The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1][2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few if any possibilities for video streaming, and the methods it does offer are usually not high performance, but it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW, and to capture the data provided by it for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.

  4. Ontology-Based Multimedia Authoring Tool for Adaptive E-Learning

    ERIC Educational Resources Information Center

    Deng, Lawrence Y.; Keh, Huan-Chao; Liu, Yi-Jen

    2010-01-01

    More video streaming technologies supporting distance learning systems are becoming popular in distributed network environments. In this paper, the authors develop a multimedia authoring tool for adaptive e-learning by using a characterization of extended media streaming technologies. The distributed approach is based on an ontology-based model.…

  5. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy among inter-frames along the temporal direction using motion-compensated temporal filtering; thus, high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.

  6. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people are required in industrial surveillance and monitoring applications such as plant control or building security, yet they must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language, employed for multi-purpose and intranet use of the 3D model.

  7. Live HDR video streaming on commodity hardware

    NASA Astrophysics Data System (ADS)

    McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan

    2015-09-01

    High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.

  8. MPEG-7 based video annotation and browsing

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens

    2003-11-01

    The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionalities. Since manual video annotation can be time-consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports and describe our approach to automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, where we apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined the single shot positions as well as the visual highlights, we store this information jointly with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user interface to provide content-based access to the video stream, and also for media browsing on a streaming server.

  9. Using a Video Split-Screen Technique To Evaluate Streaming Instructional Videos.

    ERIC Educational Resources Information Center

    Gibbs, William J.; Bernas, Ronan S.; McCann, Steven A.

    The Media Center at Eastern Illinois University developed and streamed on the Internet 26 short (one to five minutes) instructional videos about WebCT that illustrated specific functions, including logging-in, changing a password, and using chat. This study observed trainees using and reacting to selections of these videos. It set out to assess…

  10. Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing

    NASA Astrophysics Data System (ADS)

    McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1998-03-01

    A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably relative to other contemporary approaches. This paper describes several real-time contrast enhancing systems advanced at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to quantify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm x 6.4 cm x 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
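
    The histogram-based LUT scheme described above is easy to sketch in Python, assuming 8-bit grayscale frames; plain histogram equalization stands in for the paper's enhancement algorithm, and building the table from a spatially subsampled copy of the previous frame mirrors its strategy for saving calculation cycles.

        import numpy as np

        def build_equalization_lut(frame):
            """Derive a 256-entry look-up table from the frame's
            luminance histogram (classic histogram equalization)."""
            hist = np.bincount(frame.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(np.float64)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-9)
            return np.round(cdf * 255).astype(np.uint8)

        # Per frame: compute the LUT from a subsampled previous frame,
        # then stream the current frame through it.
        # lut = build_equalization_lut(prev_frame[::4, ::4])
        # enhanced = lut[frame]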

  11. Low latency adaptive streaming of HD H.264 video over 802.11 wireless networks with cross-layer feedback

    NASA Astrophysics Data System (ADS)

    Patti, Andrew; Tan, Wai-tian; Shen, Bo

    2007-09-01

    Streaming video in consumer homes over wireless IEEE 802.11 networks is becoming commonplace. Wireless 802.11 networks pose unique difficulties for streaming high-definition (HD), low-latency video due to their error-prone physical layer and media access procedures, which were not designed for real-time traffic. HD video streaming, even with sophisticated H.264 encoding, is particularly challenging due to the large number of packet fragments per slice. Cross-layer design strategies have been proposed to address the issues of video streaming over 802.11. These designs increase streaming robustness by imposing some degree of monitoring and control over 802.11 parameters from the application level, or by making the 802.11 layer media-aware. Important contributions have been made, but none of the existing approaches takes the 802.11 queue directly into account. In this paper we take a different approach and propose a cross-layer design allowing direct, expedient control over the wireless packet queue, while obtaining timely feedback on transmission status for each packet in a media flow. This method can be fully implemented on a media sender with no explicit support or changes required on the media client. We assume that due to congestion or deteriorating signal-to-noise levels, the available throughput may drop substantially for extended periods of time, and thus propose video source adaptation methods that match the bit rate to the available throughput. A particular H.264 slice encoding is presented to enable seamless switching among streams at multiple bit rates, and we explore new computationally efficient transcoding methods for when only a high bit-rate stream is available.

  12. Live Streaming of the Moon's Shadow from the Edge of Space across the United States during the August 2017 Total Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Guzik, T. G.

    2017-12-01

    On August 21, 2017, approximately 55 teams along the eclipse's path of totality across America will use sounding-balloon platforms to transmit, in real time from an altitude of 90,000 feet, HD video of the Moon's shadow as it crosses the U.S. from Oregon to South Carolina. This unprecedented activity was originally organized by the Montana Space Grant Consortium in order to 1) use the rare total eclipse event to captivate the imagination of students and encourage the development of new ballooning teams across the United States, 2) provide an inexpensive high-bandwidth data telemetry system for real-time video streaming, and 3) establish the basic infrastructure at multiple institutions enabling advanced "new generation" student ballooning projects following the eclipse event. A ballooning leadership group consisting of the Space Grant Consortia in Montana, Colorado, Louisiana, and Minnesota was established to support further development and testing of the systems, as well as to assist in training the ballooning teams. This presentation will describe the high-bandwidth telemetry system used for the never-before-attempted live streaming of HD video from the edge of space, the results of this highly collaborative science campaign stretching from coast to coast, potential uses of the data telemetry system for other student science projects, and lessons learned that can be applied to the 2024 total solar eclipse.

  13. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance on desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be divided into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or more) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback on desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
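
    A frame-differencing scheme of the kind referenced above can be illustrated with a toy encoder/decoder pair (not any of the reviewed codecs): only pixels whose change exceeds a threshold are transmitted, keeping decoding cheap enough for software-only playback.

        import numpy as np

        def encode_frame(frame, prev, threshold=4):
            """Send only the pixels that changed by more than `threshold`
            since the previous reconstructed frame."""
            diff = frame.astype(np.int16) - prev.astype(np.int16)
            idx = np.flatnonzero(np.abs(diff) > threshold)  # changed positions
            return idx, frame.ravel()[idx]                  # (indices, new values)

        def decode_frame(prev, idx, values):
            """The decoder applies the sparse update to its last frame."""
            out = prev.ravel().copy()
            out[idx] = values
            return out.reshape(prev.shape)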

  14. Fingerprint multicast in secure video streaming.

    PubMed

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amount of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes their performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.

  15. Novel dynamic caching for hierarchically distributed video-on-demand systems

    NASA Astrophysics Data System (ADS)

    Ogo, Kenta; Matsuda, Chikashi; Nishimura, Kazutoshi

    1998-02-01

    It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available content with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized quantum segment caching technique derived from an analysis of the historical usage log data generated by a line-on-demand-type service experiment, and is based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

  16. A digital audio/video interleaving system. [for Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Richards, R. W.

    1978-01-01

    A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out the improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details are given of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream. An adaptive slope delta modulation system is introduced to digitize the audio signals, producing a high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
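
    For readers unfamiliar with the technique, the following is a minimal sketch of an adaptive-slope delta modulator of the general kind described; the adaptation factor and step limits are illustrative, not the flight hardware's values. The decoder replays exactly the same state updates, which is part of why word intelligibility degrades gracefully under channel errors.

        def adm_encode(samples, step_min=1.0, step_max=512.0, factor=1.5):
            """One bit per sample: the step grows on slope overload (runs
            of identical bits) and shrinks when the bit alternates."""
            bits, est, step, last = [], 0.0, step_min, 0
            for x in samples:
                bit = 1 if x >= est else 0
                est += step if bit else -step
                step = min(step * factor, step_max) if bit == last else \
                       max(step / factor, step_min)
                bits.append(bit)
                last = bit
            return bits

        def adm_decode(bits, step_min=1.0, step_max=512.0, factor=1.5):
            """Reconstruct by mirroring the encoder's state machine."""
            out, est, step, last = [], 0.0, step_min, 0
            for bit in bits:
                est += step if bit else -step
                step = min(step * factor, step_max) if bit == last else \
                       max(step / factor, step_min)
                out.append(est)
                last = bit
            return out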

  17. Trans-Pacific tele-ultrasound image transmission of fetal central nervous system structures.

    PubMed

    Ferreira, Adilson Cunha; Araujo Júnior, Edward; Martins, Wellington P; Jordão, João Francisco; Oliani, Antônio Hélio; Meagher, Simon E; Da Silva Costa, Fabricio

    2015-01-01

    To assess the quality of images and video clips of fetal central nervous system (CNS) structures obtained by ultrasound and transmitted via tele-ultrasound from Brazil to Australia. In this cross-sectional study, 15 women with normal singleton pregnancies between 20 and 26 weeks were selected. Images and video clips of fetal CNS structures were obtained. The exams were transmitted in real time using a broadband internet connection and an inexpensive video streaming device. Four blinded examiners evaluated the quality of the exams using a Likert scale. We calculated the mean, standard deviation, and mean difference; p values were obtained from paired t-tests. The quality of the original video clips was slightly better than that of the transmitted video clips; the mean difference across all observers was 0.23 points. In 47/60 comparisons (78.3%; 95% CI = 66.4-86.9%) the quality of the video clips was judged to be the same. In 182/240 still images (75.8%; 95% CI = 70.0-80.8%) the scores of the transmitted image were considered the same as the original. We demonstrated that long-distance tele-ultrasound transmission of fetal CNS structures using an inexpensive video streaming device provided images of subjectively good quality.

  18. Research on quality metrics of wireless adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience, so new quality metrics have been widely studied recently. Against this background, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate these QoE models, three performance metrics (SROCC, PLCC and RMSE) comparing subjective and predicted MOS (Mean Opinion Score) are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
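
    The three performance metrics named above are standard and straightforward to reproduce; a minimal sketch using SciPy (variable names assumed):

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        def qoe_model_performance(subjective_mos, predicted_mos):
            """SROCC (monotonicity), PLCC (linearity) and RMSE (accuracy)
            between subjective and model-predicted MOS values."""
            s = np.asarray(subjective_mos, dtype=float)
            p = np.asarray(predicted_mos, dtype=float)
            srocc, _ = spearmanr(s, p)
            plcc, _ = pearsonr(s, p)
            rmse = float(np.sqrt(np.mean((s - p) ** 2)))
            return {"SROCC": srocc, "PLCC": plcc, "RMSE": rmse}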

  19. Portable Airborne Laser System Measures Forest-Canopy Height

    NASA Technical Reports Server (NTRS)

    Nelson, Ross

    2005-01-01

    The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device (CCD) camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, where the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes. The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.

  20. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    NASA Astrophysics Data System (ADS)

    Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad

    2013-12-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. When a peer with additional upload bandwidth departs, such overlay networks become very vulnerable to churn. In this paper, a two-dimensional array-based overlay network is proposed for streaming live video data. As there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which improves load balance and performance. The overlay network uses the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it among heterogeneous-strength peers in a fair-treatment approach and among homogeneous-strength peers in a uniform approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies, and the results are presented in this paper.

  1. Gaze-Aware Streaming Solutions for the Next Generation of Mobile VR Experiences.

    PubMed

    Lungaro, Pietro; Sjoberg, Rickard; Valero, Alfredo Jose Fanghella; Mittal, Ashutosh; Tollmar, Konrad

    2018-04-01

    This paper presents a novel approach to content delivery for video streaming services. It exploits information from connected eye-trackers embedded in the next generation of VR Head Mounted Displays (HMDs). The proposed solution aims to deliver high visual quality, in real time, around the users' fixation points while lowering the quality everywhere else. The goal of the proposed approach is to substantially reduce the overall bandwidth requirements for supporting VR video experiences while delivering high levels of user-perceived quality. The prerequisites for achieving these results are: (1) mechanisms that can cope with different degrees of latency in the system and (2) solutions that support fast adaptation of video quality in different parts of a frame, without requiring a large increase in bitrate. A novel codec configuration, capable of supporting near-instantaneous video quality adaptation in specific portions of a video frame, is presented. The proposed method exploits in-built properties of HEVC encoders, and while it introduces a moderate amount of error, these errors are undetectable by users. Fast adaptation is the key to enabling gaze-aware streaming and its reduction in bandwidth. A testbed implementing gaze-aware streaming, together with a prototype HMD with an in-built eye tracker, is presented and was used for testing with real users. The studies quantified the bandwidth savings achievable by the proposed approach and characterized the relationship between Quality of Experience (QoE) and network latency. The results showed that up to 83% less bandwidth is required to deliver high QoE levels to the users, as compared to conventional solutions.

  2. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits videography to be obtained quickly and easily. Mixing the different streams of video input from all the devices in use in the operating room and applying filters and effects produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable and easy-to-use medium on which to store, re-edit, or tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.

  3. Communication system analysis for manned space flight

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1978-01-01

    The development of adaptive delta modulators capable of digitizing a video signal is summarized. The delta modulator encoder accepts a 4 MHz black and white composite video signal or a color video signal and encodes it into a stream of binary digits at a rate which can be adjusted from 8 Mb/s to 24 Mb/s. The output bit rate is determined by the user and alters the quality of the video picture. The digital signal is decoded using the adaptive delta modulator decoder to reconstruct the picture.

  4. In-network adaptation of SHVC video in software-defined networks

    NASA Astrophysics Data System (ADS)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted, centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition (U-HD) video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high-definition (HD) video, posing very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide a valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control-plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine that uses algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and evaluated on an SDN, allowing us to provide important benchmarks for video streaming over SDN and for SDN control-plane latency.
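
    Although the VQAM decision engine itself is not spelled out in the abstract, the core in-network adaptation step for a scalable stream can be illustrated as choosing the largest prefix of layers that fits the measured bandwidth. The sketch below and its parameters are assumptions, not the authors' algorithm.

        def layers_to_forward(available_kbps, layer_rates_kbps):
            """Forward the base layer plus as many enhancement layers as
            the available bandwidth allows; drop the rest at the MANE."""
            chosen, total = [], 0
            for layer_id, rate in enumerate(layer_rates_kbps):
                if layer_id == 0 or total + rate <= available_kbps:
                    chosen.append(layer_id)
                    total += rate
                else:
                    break
            return chosen

        # e.g. layers_to_forward(4000, [2000, 1500, 1200]) -> [0, 1]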

  5. Apply network coding for H.264/SVC multicasting

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Kuo, C.-C. Jay

    2008-08-01

    In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second is error concealment at the decoder end to compensate for lost packets. A large amount of research has been done in these two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks, and it was shown in our previous work that multicast video streaming benefits from NC through improved throughput. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve the video transmission performance when applying network coding.

  6. A time-lapse photography method for monitoring salmon (Oncorhynchus spp.) passage and abundance in streams

    PubMed Central

    Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.

    2016-01-01

    Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is a growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock-complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance across time, data needed by researchers interested in predator--prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams in southwest Kodiak Island, Alaska in which high densities of sockeye salmon spawn. PMID:27326378

  7. Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

    County Sheriff’s Department, use certain measurements such as the distance between eyes, the length of the nose, or the shape of the ears. However…captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software…stream of data. High-resolution video systems, such as those described below, will be able to capture orders of magnitude more data in one video frame

  8. A web-based video annotation system for crowdsourcing surveillance videos

    NASA Astrophysics Data System (ADS)

    Gadgil, Neeraj J.; Tahboub, Khalid; Kirsh, David; Delp, Edward J.

    2014-03-01

    Video surveillance systems are of great value for preventing threats and identifying/investigating criminal activities. Manual analysis of a huge amount of video data from several cameras over a long period of time often becomes impracticable. The use of automatic detection methods can be challenging when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective method for utilizing human intelligence to perform such tasks. Our system provides a platform for the annotation of surveillance video in an organized and controlled way. One can monitor a surveillance system using a set of tools including training modules, roles and labels, and task management. This system can be used in a real-time streaming mode to detect potential threats or as an investigative tool to analyze past events. Annotators can annotate the video content assigned to them for suspicious activity or criminal acts. First responders are then able to view the collective annotations and receive email alerts about newly reported incidents. They can also keep track of the annotators' training performance, manage their activities and reward their success. By providing this system, the process of video analysis is made more efficient.

  9. Streaming Video to Enhance Students' Reflection in Dance Education

    ERIC Educational Resources Information Center

    Leijen, Ali; Lam, Ineke; Wildschut, Liesbeth; Simons, P. Robert-Jan; Admiraal, Wilfried

    2009-01-01

    This paper presents an evaluation case study that describes the experiences of 15 students and 2 teachers using a video-based learning environment, DiViDU, to facilitate students' daily reflection activities in a composition course and a ballet course. To support dance students' reflection processes streaming video was applied as follows: video…

  10. Factors that Influence Learning Satisfaction Delivered by Video Streaming Technology

    ERIC Educational Resources Information Center

    Keenan, Daniel Stephen

    2010-01-01

    In 2005, over 100,000 e-Learning courses were offered in over half of all U.S. postsecondary education institutions with nearly 90% of all community colleges and four year institutions offering online education. Streaming video is commonplace across the internet offering seamless video and sound anywhere connectivity is available effectively…

  11. Video Streaming in Online Learning

    ERIC Educational Resources Information Center

    Hartsell, Taralynn; Yuen, Steve Chi-Yin

    2006-01-01

    The use of video in teaching and learning is a common practice in education today. As learning online becomes more of a common practice in education, streaming video and audio will play a bigger role in delivering course materials to online learners. This form of technology brings courses alive by allowing online learners to use their visual and…

  12. Live video monitoring robot controlled by web over internet

    NASA Astrophysics Data System (ADS)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot, and they have huge applications in military and industrial areas for lifting heavy weights, making accurate placements, and repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical and mechanical engineering and can carry out tasks automatically on its own or under human supervision. The camera is the eye of the robot, called robovision; it helps in monitoring security systems and can also reach places the human eye cannot. This paper describes the development of a live video streaming robot controlled from a website. We designed the web controls for moving the robot left, right, forward and backward while it streams video. As we move to the smart environment, or IoT (Internet of Things), of smart devices, the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of this robot system; the required motors and an R Pi 2 surveillance camera are connected to the Raspberry Pi.
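
    A common way to realize this browser-controlled streaming pattern is an MJPEG stream served over HTTP. The sketch below uses OpenCV and Flask and leaves the motor control as a placeholder, since the paper's exact wiring and software stack are not given.

        import cv2
        from flask import Flask, Response

        app = Flask(__name__)
        camera = cv2.VideoCapture(0)  # first attached camera

        def mjpeg_frames():
            """Yield JPEG-compressed frames in multipart/x-mixed-replace
            format, which browsers render as live video."""
            while True:
                ok, frame = camera.read()
                if not ok:
                    break
                _, jpeg = cv2.imencode(".jpg", frame)
                yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                       + jpeg.tobytes() + b"\r\n")

        @app.route("/video")
        def video():
            return Response(mjpeg_frames(),
                            mimetype="multipart/x-mixed-replace; boundary=frame")

        @app.route("/move/<direction>")
        def move(direction):
            # Placeholder: drive the motors (left/right/forward/backward)
            # via the Pi's GPIO pins here.
            return "moving " + direction

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8000)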

  13. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse-tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. A perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already-decoded data of the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
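
    The layered structure described above can be sketched as a residual computation; for simplicity the residual below is taken in linear RGB rather than the paper's perceptually uniform color space, and tonemap/inverse_tonemap are caller-supplied stand-ins.

        import numpy as np

        def encode_layers(hdr, tonemap, inverse_tonemap):
            """Base layer: 8-bit tone-mapped video. Enhancement layer:
            residual between the original HDR frame and the inverse-
            tone-mapped base-layer prediction."""
            base = np.clip(np.round(tonemap(hdr) * 255), 0, 255).astype(np.uint8)
            prediction = inverse_tonemap(base.astype(np.float64) / 255.0)
            return base, hdr - prediction

        def decode_hdr(base, enhancement, inverse_tonemap):
            """Legacy decoders stop at `base`; HDR decoders add the residual."""
            return inverse_tonemap(base.astype(np.float64) / 255.0) + enhancement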

  14. Delivering Instruction via Streaming Media: A Higher Education Perspective.

    ERIC Educational Resources Information Center

    Mortensen, Mark; Schlieve, Paul; Young, Jon

    2000-01-01

    Describes streaming media, an audio/video presentation that is delivered across a network so that it is viewed while being downloaded onto the user's computer, including a continuous stream of video that can be pre-recorded or live. Discusses its use for nontraditional students in higher education and reports on implementation experiences. (LRW)

  15. Task-oriented quality assessment and adaptation in real-time mission critical video streaming applications

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2015-02-01

    In recent years video traffic has become the dominant application on the Internet, with global year-on-year increases in video-oriented consumer services. Driven by improved bandwidth in both mobile and fixed networks, steadily reducing hardware costs and the development of new technologies, many existing and new classes of commercial and industrial video applications are now being upgraded or emerging. Use cases for these applications include public and private security monitoring for loss prevention or intruder detection, industrial process monitoring and critical infrastructure monitoring. The use of video is becoming commonplace in defence, security, commercial, industrial, educational and health contexts. Toward optimal performance, the design or optimisation of each of these applications should be context-aware and task-oriented, with the characteristics of the video stream (frame rate, spatial resolution, bandwidth, etc.) chosen to match the use-case requirements. For example, in the security domain, a task-oriented consideration may be that higher-resolution video is required to identify an intruder than to simply detect his presence, whilst in the same case contextual factors, such as the requirement to transmit over a resource-limited wireless link, may impose constraints on the selection of optimum task-oriented parameters. This paper presents a novel, conceptually simple and easily implemented method of assessing video quality relative to its suitability for a particular task and dynamically adapting video streams during transmission to ensure that the task can be successfully completed. First, we define two principal classes of task: recognition tasks and event detection tasks. These task classes are further subdivided into a set of task-related profiles, each of which is associated with a set of task-oriented attributes (minimum spatial resolution, minimum frame rate, etc.). For example, in the detection class, profiles for intruder detection will require different temporal characteristics (frame rate) from those used for the detection of high-motion objects such as vehicles or aircraft. We also define a set of contextual attributes that are associated with each instance of a running application, including resource constraints imposed by the transmission system employed and the hardware platforms used as source and destination of the video stream. Empirical results are presented and analysed to demonstrate the advantages of the proposed schemes.
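
    The task-profile idea maps naturally onto a small lookup structure; the profile names and attribute values below are hypothetical illustrations of the two task classes, not the paper's actual profiles.

        from dataclasses import dataclass

        @dataclass
        class TaskProfile:
            """Minimum task-oriented stream attributes."""
            min_width: int
            min_height: int
            min_fps: int

        PROFILES = {
            "detect_intruder":    TaskProfile(320, 240, 5),    # presence only
            "recognise_intruder": TaskProfile(1280, 720, 10),  # identity needs detail
            "detect_vehicle":     TaskProfile(320, 240, 25),   # high-motion events
        }

        def stream_suits_task(profile_name, width, height, fps):
            p = PROFILES[profile_name]
            return width >= p.min_width and height >= p.min_height and fps >= p.min_fps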

  16. Automatic topics segmentation for TV news video

    NASA Astrophysics Data System (ADS)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built. We exploit the features that characterize instances of the same program type to identify the different types of programs in the television stream. The role of the video features is to represent the visual invariants of each jingle, using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the visual similarity of the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.

  17. Video time encoding machines.

    PubMed

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
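
    As a concrete illustration of the integrate-and-fire mechanism, here is a minimal single-neuron time encoder (the threshold, bias and reset-by-subtraction are generic choices; the architecture above feeds a whole bank of such circuits from filters):

        def iaf_encode(signal, dt, threshold, bias=0.0):
            """Integrate the biased input; emit a spike time whenever the
            integral crosses the threshold, then reset by subtraction.
            The returned spike times represent the signal."""
            spikes, integral = [], 0.0
            for n, x in enumerate(signal):
                integral += (x + bias) * dt
                if integral >= threshold:
                    spikes.append(n * dt)
                    integral -= threshold
            return spikes

        # `bias` should be large enough that x + bias > 0, so the
        # integral increases monotonically between spikes.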

  18. Video Time Encoding Machines

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value. PMID:21296708

  19. A Near-Reality Approach to Improve the e-Learning Open Courseware

    ERIC Educational Resources Information Center

    Yu, Pao-Ta; Liao, Yuan-Hsun; Su, Ming-Hsiang

    2013-01-01

    The open courseware proposed by MIT with single streaming video has been widely accepted by most of the universities as their supplementary learning contents. In this streaming video, a digital video camera is used to capture the speaker's gesture and his/her PowerPoint presentation at the same time. However, the blurry content of PowerPoint…

  20. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCRCB video data into an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone (H.261) streams can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 x 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having relatively low complexity and only small DRAM requirements, the developed solution can be applied to low-cost encoding products for consumer electronics.

  1. A Video Game Platform for Exploring Satellite and In-Situ Data Streams

    NASA Astrophysics Data System (ADS)

    Cai, Y.

    2014-12-01

    Exploring the spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting the movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, multi-modal imagery data analysis. Analytical models aim to reveal the inner structure, dynamics, and relationships of things, but they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been under development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. Object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. A head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.

  2. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to the use of the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method such that there is no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry, instead of n entries, in the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
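
    For context, conventional RNC of the kind MATIN improves upon encodes each outgoing packet as a random linear combination of the source blocks over GF(2^8) and ships the full coefficient vector as a header, which is exactly the overhead the paper targets. A minimal sketch of that baseline (not MATIN itself):

        import random

        # GF(2^8) exp/log tables, primitive polynomial 0x11d
        GF_EXP, GF_LOG = [0] * 512, [0] * 256
        x = 1
        for i in range(255):
            GF_EXP[i], GF_LOG[x] = x, i
            x <<= 1
            if x & 0x100:
                x ^= 0x11d
        for i in range(255, 512):
            GF_EXP[i] = GF_EXP[i - 255]

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else GF_EXP[GF_LOG[a] + GF_LOG[b]]

        def rnc_encode(blocks, n_coded):
            """Produce n_coded random linear combinations of the source
            blocks; each packet carries its n-entry coefficient header."""
            coded = []
            for _ in range(n_coded):
                coeffs = [random.randrange(256) for _ in blocks]
                payload = bytearray(len(blocks[0]))
                for c, block in zip(coeffs, blocks):
                    for j, byte in enumerate(block):
                        payload[j] ^= gf_mul(c, byte)
                coded.append((coeffs, bytes(payload)))
            return coded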

  3. Online and unsupervised face recognition for continuous video stream

    NASA Astrophysics Data System (ADS)

    Huo, Hongwen; Feng, Jufu

    2009-10-01

    We present a novel online face recognition approach for video streams in this paper. Our method includes two stages: pre-training and online training. In the pre-training phase, our method observes interactions, collects batches of input data, and attempts to estimate their distributions (a Box-Cox transformation is adopted here to normalize the rough estimates). In the online training phase, our method incrementally improves the classifiers' knowledge of the face space and updates it continuously with incremental eigenspace analysis. The performance achieved by our method shows its great potential in video stream processing.
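
    Incremental eigenspace analysis of the general kind mentioned can be sketched with scikit-learn's IncrementalPCA; the component count and batching are illustrative, and the paper's exact update rule may differ.

        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        ipca = IncrementalPCA(n_components=32)

        def online_update(face_batch):
            """face_batch: (n_faces, n_pixels) array of vectorised face
            crops; each batch must hold at least n_components samples."""
            ipca.partial_fit(face_batch)      # refine the face subspace

        def project(face):
            """Project a new face into the current eigenspace."""
            return ipca.transform(face.reshape(1, -1))[0]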

  4. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream, and routing is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability. The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network, ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.

  5. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

    In this paper we present a teledentistry system aimed at the Second Opinion task. It makes use of a particular camera, called an intra-oral or dental camera, to capture photographs and real-time video of the inside of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream by the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the Second Opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  6. Record Desktop Activity as Streaming Videos for Asynchronous, Video-Based Collaborative Learning.

    ERIC Educational Resources Information Center

    Chang, Chih-Kai

    As Web-based courses using videos have become popular in recent years, the issue of managing audiovisual aids has become noteworthy. The contents of audiovisual aids may include a lecture, an interview, a featurette, an experiment, etc. The audiovisual aids of Web-based courses are transformed into the streaming format that can make the quality of…

  7. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require real-time adaptation of the video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation in over 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.

  8. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field of view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  9. An analysis of technology usage for streaming digital video in support of a preclinical curriculum.

    PubMed

    Dev, P; Rindfleisch, T C; Kush, S J; Stringer, J R

    2000-01-01

    Usage of streaming digital video of lectures in preclinical courses was measured by analysis of the data in the log file maintained on the web server. We observed that students use the video when it is available. They do not use it to replace classroom attendance but rather for review before examinations or when a class has been missed. Usage of video has not increased significantly for any course within the 18 month duration of this project.

  10. Video monitoring of oxygen saturation during controlled episodes of acute hypoxia.

    PubMed

    Addison, Paul S; Foo, David M H; Jacquel, Dominique; Borg, Ulf

    2016-08-01

    A method for extracting video photoplethysmographic information from an RGB video stream was tested on data acquired during a porcine model of acute hypoxia. Cardiac pulsatile information was extracted from the acquired signals and processed to determine a continuously reported oxygen saturation (SvidO2). A high degree of correlation was found between the video measurements and a reference pulse oximeter. The calculated mean bias across all eight desaturation episodes was -0.03% (range: -0.21% to 0.24%) and the accuracy was 4.90% (range: 3.80% to 6.19%). The results support the hypothesis that oxygen saturation trending can be evaluated accurately from a video system during acute hypoxia.

  11. Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea

    Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows and underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
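
    The feedback loop described (estimated available bandwidth plus client buffer state driving the server's transmission rate) can be sketched as follows; the thresholds and back-off factors are illustrative assumptions, not the paper's tuned values.

        def choose_send_rate(estimated_ab_kbps, buffer_fill, buffer_size,
                             low_mark=0.25, high_mark=0.75):
            """Pick a transmission rate from the RTCP-reported AB estimate
            and the client buffer occupancy."""
            occupancy = buffer_fill / buffer_size
            if occupancy > high_mark:            # overflow risk: back off
                return 0.5 * estimated_ab_kbps
            if occupancy < low_mark:             # underflow risk: use all AB
                return estimated_ab_kbps
            return 0.9 * estimated_ab_kbps       # steady state: safety margin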

  12. SOA approach to battle command: simulation interoperability

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.

    2010-04-01

    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieving seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct an SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing battle command systems. The demonstration will leverage RDECOM MATREX simulation tools and the TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components, discussed in turn. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (a sensor or sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and will abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will automatically discover the existence of a named mission task, and will interpret, translate, and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects and a software development kit (SDK) will be used as the basic data object definition for the implemented web services.

  13. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, power-demanding video encoding runs on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of a wireless video communication system under an energy constraint, we need a power-rate-distortion (P-R-D) analysis framework that extends traditional R-D analysis by including another dimension, power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture that is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, this complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behavior of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
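
    The optimization can be illustrated numerically. Assuming a commonly cited parametric form of the P-R-D surface, D(R, P) = sigma^2 * 2^(-lambda * R * P^(2/3)) -- an assumption for illustration, not necessarily the paper's exact model -- a device can grid-search the rate/encoding-power pair that minimizes distortion under a joint energy budget:

        import numpy as np

        SIGMA2, LAM = 1.0, 0.02            # illustrative model constants

        def distortion(rate_kbps, power):
            """Assumed P-R-D surface: more rate or more encoding power (complexity,
            delivered via DVS) lowers distortion, with diminishing returns."""
            return SIGMA2 * 2.0 ** (-LAM * rate_kbps * power ** (2.0 / 3.0))

        ENERGY = 100.0                     # joint budget: transmission + encoding cost
        best_d, best_r, best_p = min(
            (distortion(r, p), r, p)
            for r in np.linspace(50, 1000, 96)
            for p in np.linspace(1, 99, 99)
            if 0.05 * r + 1.0 * p <= ENERGY)   # illustrative per-unit energy costs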

  14. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class, unequally protected video transmission system for wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bit-rate (CBR) transmission. In the splitting process, the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Partitioning is then applied to the ITU-T H.263 coding standard. As a transport vehicle, we consider one of the leading third-generation cellular radio standards, known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system, where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  15. A real-time TV logo tracking method using template matching

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Sang, Xinzhu; Yan, Binbin; Leng, Junmin

    2012-11-01

    A fast and accurate TV logo detection method is presented, based on real-time image filtering, noise elimination, and recognition of image features including edge and gray-level information. An optical template is first extracted from the sample video stream using a time-averaging method; accurate template extraction is essential. Different templates are then used to match different logos in separate video streams of different resolutions, based on the topological features of the logos. Twelve video streams with different logos were used to verify the proposed method, and the experimental results demonstrate an accuracy of up to 99%.
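
    A sketch of the two steps the abstract names, using OpenCV: the template is built by time-averaging a corner region (the static logo stays sharp while moving content blurs out), and each incoming frame is then checked by normalized template matching. The ROI, frame count, and threshold are illustrative assumptions.

        import cv2
        import numpy as np

        ROI = (0, 0, 200, 120)             # x, y, w, h of the logo corner (assumed)

        def build_logo_template(cap, n_frames=200):
            """Time-average the ROI of n_frames frames from an open cv2.VideoCapture."""
            x, y, w, h = ROI
            acc, count = np.zeros((h, w), np.float32), 0
            for _ in range(n_frames):
                ok, frame = cap.read()
                if not ok:
                    break
                acc += cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY).astype(np.float32)
                count += 1
            return (acc / max(count, 1)).astype(np.uint8)

        def logo_present(frame, template, threshold=0.7):
            x, y, w, h = ROI
            patch = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
            score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED).max()
            return score >= threshold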

  16. Competitive action video game players display rightward error bias during on-line video game play.

    PubMed

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

    Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures, including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors, such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research; however, further study is required before one can determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.

  17. Platform for intraoperative analysis of video streams

    NASA Astrophysics Data System (ADS)

    Clements, Logan; Galloway, Robert L., Jr.

    2004-05-01

    Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 so that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization using two relatively inexpensive host computers has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near-RT rates, both independently of and while running the IIGS system.

  18. Using the Periscope Live Video-Streaming Application for Global Pathology Education: A Brief Introduction.

    PubMed

    Fuller, Maren Y; Mukhopadhyay, Sanjay; Gardner, Jerad M

    2016-07-21

    Periscope is a live video-streaming smartphone application (app) that allows any individual with a smartphone to broadcast live video simultaneously to multiple smartphone users around the world. The aim of this review is to describe the potential of this emerging technology for global pathology education. To our knowledge, since the launch of the Periscope app (2015), only a handful of educational presentations by pathologists have been streamed as live video via Periscope. This review includes links to these initial attempts, a step-by-step guide for those interested in using the app for pathology education, and a summary of the pros and cons, including ethical/legal issues. We hope that pathologists will appreciate the potential of Periscope for sharing their knowledge, expertise, and research with a live (and potentially large) audience without the barriers associated with traditional video equipment and standard classroom/conference settings.

  19. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method that guarantees no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry, instead of n, into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. Results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530
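
    For context, the sketch below shows the classic RNC encoding step whose per-packet overhead MATIN targets: each coded block carries a full n-entry coefficient vector over GF(2^8). (MATIN's single-entry header and dependency-free matrix construction are not reproduced here.)

        import os

        def gf_mul(a, b):
            """Multiply in GF(2^8) with the AES polynomial 0x11B (a common choice)."""
            p = 0
            for _ in range(8):
                if b & 1:
                    p ^= a
                hi = a & 0x80
                a = (a << 1) & 0xFF
                if hi:
                    a ^= 0x1B
                b >>= 1
            return p

        def rnc_encode(blocks):
            """One coded block = random GF(2^8) combination of the n source blocks;
            the n random coefficients must travel in the packet header."""
            n, size = len(blocks), len(blocks[0])
            coeffs = list(os.urandom(n))
            coded = bytearray(size)
            for c, blk in zip(coeffs, blocks):
                for i, byte in enumerate(blk):
                    coded[i] ^= gf_mul(c, byte)
            return coeffs, bytes(coded)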

  20. A video event trigger for high frame rate, high resolution video technology

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1991-12-01

    When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  1. A video event trigger for high frame rate, high resolution video technology

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1991-01-01

    When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
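
    The pre-trigger/post-trigger storage idea can be sketched with an ordinary ring buffer; here a mean-absolute-difference test stands in for the fuzzy-logic change detector, and all sizes and thresholds are illustrative.

        import numpy as np
        from collections import deque

        class EventTriggeredRecorder:
            """Keep a pre-trigger ring buffer; on a detected change, archive it plus
            a fixed number of post-trigger frames, so static scenes are never stored."""
            def __init__(self, pre=120, post=240, threshold=12.0):
                self.ring = deque(maxlen=pre)      # pre-trigger storage
                self.post, self.post_left = post, 0
                self.threshold = threshold
                self.archive, self.prev = [], None

            def feed(self, frame):
                f = frame.astype(np.int16)
                triggered = (self.prev is not None and
                             np.abs(f - self.prev).mean() > self.threshold)
                self.prev = f
                if triggered and self.post_left == 0:
                    self.archive.extend(self.ring) # flush pre-trigger history
                    self.ring.clear()
                    self.post_left = self.post
                if self.post_left > 0:
                    self.archive.append(frame)     # post-trigger frames
                    self.post_left -= 1
                else:
                    self.ring.append(frame)        # static scene: ring only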

  2. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers for all the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance, to removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (the original data format) and then constructs a 3D model (the new data medium) using structure-from-motion. The generated 3D model provides warfighters additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
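
    The core structure-from-motion step such a pipeline relies on can be sketched with OpenCV for a single image pair; a real system chains many views and bundle-adjusts. Both images are grayscale uint8 arrays, and K is the 3x3 camera intrinsic matrix, assumed known.

        import cv2
        import numpy as np

        def two_view_points(img1, img2, K):
            """Match features, recover relative pose, triangulate a sparse 3D cloud."""
            orb = cv2.ORB_create(4000)
            k1, d1 = orb.detectAndCompute(img1, None)
            k2, d2 = orb.detectAndCompute(img2, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            p1 = np.float32([k1[m.queryIdx].pt for m in matches])
            p2 = np.float32([k2[m.trainIdx].pt for m in matches])
            E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
            P2 = K @ np.hstack([R, t])                          # second camera pose
            X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
            return (X[:3] / X[3]).T                             # N x 3 points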

  3. Robust audio-visual speech recognition under noisy audio-video conditions.

    PubMed

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to the video and/or audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance compared to another well-known dynamic stream weighting approach, and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
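
    A simplified per-frame reading of the stream-weighting rule (illustrative only; the paper's exact MWSP formulation lives inside an HMM decoder): try candidate weights and keep the one that maximizes the winning class's weighted posterior, so no explicit noise measurement is ever needed.

        import numpy as np

        def mwsp_combine(log_p_audio, log_p_video, weights=np.linspace(0.0, 1.0, 11)):
            """log_p_audio/log_p_video: per-class log-likelihood arrays for one frame.
            Returns (score, chosen stream weight, winning class index)."""
            best = None
            for lam in weights:
                combined = lam * log_p_audio + (1.0 - lam) * log_p_video
                score = combined.max()
                if best is None or score > best[0]:
                    best = (score, float(lam), int(combined.argmax()))
            return best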

  4. Combining multi-layered bitmap files using network specific hardware

    DOEpatents

    DuBois, David H [Los Alamos, NM; DuBois, Andrew J [Santa Fe, NM; Davenport, Carolyn Connor [Los Alamos, NM

    2012-02-28

    Images and video can be produced by compositing, or alpha blending, a group of image or video layers. Increasing the resolution or the number of layers results in increased computational demands, so the available computational resources limit the images and videos that can be produced. A computational architecture in which the image layers are packetized and streamed through processors can be easily scaled to handle many image layers and high resolutions. The image layers are packetized to produce packet streams. The packets in the streams are received, placed in queues, and processed. For alpha blending, ingress queues receive the packetized image layers, which are then z-sorted and sent to egress queues. The egress queue packets are alpha blended to produce an output image or video.
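
    In array form (rather than as packet streams through ingress/egress queues), the z-sort-then-blend stage amounts to back-to-front 'over' compositing. A minimal sketch with straight (non-premultiplied) alpha:

        import numpy as np

        def composite(layers):
            """layers: list of {'z': depth, 'rgba': HxWx4 float array in [0, 1]}.
            Sort far-to-near, then blend each layer over the running result."""
            ordered = sorted(layers, key=lambda l: l["z"], reverse=True)
            out = np.zeros_like(ordered[0]["rgba"])
            for layer in ordered:
                src = layer["rgba"]
                a = src[..., 3:4]
                out[..., :3] = src[..., :3] * a + out[..., :3] * (1.0 - a)
                out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
            return out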

  5. Autosophy: an alternative vision for satellite communication, compression, and archiving

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus; Holtz, Eric; Kalienky, Diana

    2006-08-01

    Satellite communication and archiving systems are now designed according to an outdated Shannon information theory in which all data is transmitted as meaningless bit streams. Video bit rates, for example, are determined by screen size, color resolution, and scanning rates. The video "content" is irrelevant, so that totally random images require the same bit rates as blank images. An alternative system design, based on the newer Autosophy information theory, is now evolving, which transmits data "content" or "meaning" in a universally compatible 64-bit format. This would allow mixing all multimedia transmissions in the Internet's packet stream. The new system design uses self-assembling data structures, which grow like data crystals or data trees in electronic memories, for both communication and archiving. The advantages for satellite communication and archiving may include: very high lossless image and video compression, unbreakable encryption, resistance to transmission errors, universally compatible data formats, self-organizing error-proof mass memories, immunity to the Internet's Quality of Service problems, and error-proof secure communication protocols. Legacy data transmission formats can be converted by simple software patches or integrated chipsets to be forwarded through any media - satellites, radio, Internet, cable - without needing to be reformatted. This may result in orders-of-magnitude improvements for all communication and archiving systems.

  6. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    PubMed

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  7. Remote gaming on resource-constrained devices

    NASA Astrophysics Data System (ADS)

    Reza, Waazim; Kalva, Hari; Kaufman, Richard

    2010-08-01

    Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low-cost mobile devices. In the remote gaming approach, the responsibility for rendering a game and advancing the game play is placed on remote servers instead of the resource-constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper, we present the design and development of such a system and evaluate the performance and design considerations to maximize the end-user gaming experience.

  8. VLSI-based video event triggering for image data compression

    NASA Astrophysics Data System (ADS)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  9. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.

  10. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  11. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARtiSt project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called High Dynamic Range (HDR) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame; (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times; (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique; and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
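
    The exposure-bracketing merge at the heart of the camera can be sketched in a few lines, assuming a linear sensor response (the hardware pipeline implements a Debevec-style merge; the hat weighting and three-exposure setup here are illustrative):

        import numpy as np

        def merge_hdr(frames, exposure_times_s):
            """frames: bracketed uint8 LDR images; returns a relative radiance map."""
            acc, wsum = None, None
            for img, t in zip(frames, exposure_times_s):
                z = img.astype(np.float32) / 255.0
                w = 1.0 - np.abs(2.0 * z - 1.0)         # favour well-exposed pixels
                radiance = z / t                        # linear response assumed
                acc = radiance * w if acc is None else acc + radiance * w
                wsum = w if wsum is None else wsum + w
            return acc / np.maximum(wsum, 1e-6)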

  12. Segment scheduling method for reducing 360° video streaming latency

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging format in the media industry, enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth, and the computational power used to decode the video, is wasted because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. The paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video streaming methods. The proposed dual-buffer segment scheduling method is implemented in an end-to-end, tile-based, viewport-adaptive 360° video streaming platform, in which the entire 360° video is divided into a number of tiles and each tile is independently encoded into multiple quality-level representations. The client requests different quality-level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. Simulation results verify that the proposed dual-buffer segment scheduling algorithm reduces the viewport switch latency and utilizes the available bandwidth more efficiently. As a result, a more consistent, immersive 360° video viewing experience can be presented to the user.
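
    A minimal sketch of the dual-buffer decision rule described above, with hypothetical names and targets: the base-layer buffer is kept deep for stall protection, while viewport requests are deferred so each one binds to the freshest head orientation.

        class DualBufferScheduler:
            def __init__(self, base_target_s=10.0, viewport_target_s=2.0):
                self.base_target = base_target_s    # deep buffer of low-quality tiles
                self.vp_target = viewport_target_s  # shallow buffer of viewport tiles

            def next_request(self, base_level_s, vp_level_s, head_orientation):
                if base_level_s < self.base_target:        # protect against stalls first
                    return ("base", None)
                if vp_level_s < self.vp_target:            # request viewport tiles late,
                    return ("viewport", head_orientation)  # bound to fresh orientation
                return ("idle", None)

    Keeping the viewport buffer deliberately shallow is the design choice that trades a little quality margin for much lower orientation-to-picture latency.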

  13. Stackable middleware services for advanced multimedia applications. Final report for period July 14, 1999 - July 14, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Wu-chi; Crawfis, Roger; Weide, Bruce

    2002-02-01

    In this project, the authors propose the research, development, and distribution of a stackable, component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy-to-use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that, for advanced scientific computing applications, lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored-video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will happen transparently to the user (with some basic controls exported to the user for domain-specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower-level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, "one-off" applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst for future video-based applications, particularly those of advanced scientific computing.

  14. Streaming weekly soap opera video episodes to smartphones in a randomized controlled trial to reduce HIV risk in young urban African American/black women.

    PubMed

    Jones, Rachel; Lacroix, Lorraine J

    2012-07-01

    Love, Sex, and Choices is a 12-episode soap opera video series created as an intervention to reduce HIV sex risk. Its effect on women's HIV risk behavior was evaluated in a randomized controlled trial in 238 high-risk, predominantly African American young adult women in the urban Northeast. To facilitate on-demand access and privacy, the episodes were streamed to study-provided smartphones. Here, we discuss the development of a mobile platform to deliver the 12 weekly video episodes or weekly HIV risk reduction written messages to smartphones, including the technical requirements, development, and evaluation. The popularity of the smartphone and use of the Internet for multimedia offer a new channel to address health disparities in traditionally underserved populations. This is the first study to report on streaming a serialized video-based intervention to a smartphone. The approach described here may provide useful insights for assessing the advantages and disadvantages of smartphones in implementing a video-based intervention.

  15. Studies on a Novel Neuro-dynamic Model for Prediction Learning of Fluctuated Data Streams: Beyond Dichotomy between Probabilistic and Deterministic Models

    DTIC Science & Technology

    2014-11-04

    Learning to predict perceptual streams or encountered events by acquiring internal models is indispensable for intelligent or cognitive systems, because various cognitive functions, including goal-directed planning, mental simulation, and recognition of the current situation, are based on this competency. Applications to learning by robots, as well as video image understanding by accumulated learning of exemplars, are discussed.

  16. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm is comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
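
    As a simpler stand-in for the paper's comparison of per-frame 2-D segmentations, the sketch below flags cuts from the grey-level histogram distance between consecutive frames (the sliding-window logic and label propagation are omitted):

        import cv2

        def shot_boundaries(path, threshold=0.5):
            """Return indices of frames whose histogram differs sharply from the previous frame."""
            cap = cv2.VideoCapture(path)
            cuts, prev, idx = [], None, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                hist = cv2.normalize(cv2.calcHist([gray], [0], None, [64], [0, 256]), None).flatten()
                if prev is not None and cv2.compareHist(prev, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                    cuts.append(idx)
                prev, idx = hist, idx + 1
            cap.release()
            return cuts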

  17. Image quality assessment for video stream recognition systems

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.

    2018-04-01

    Recognition and machine vision systems have long been widely used in many disciplines to automate various industrial and everyday processes. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results and makes it impossible to assess the reliability of such systems. For this reason, it is necessary to perform quality control of the input data of recognition systems, a task facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate an approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the problem of selecting the best frames for recognition in a video stream on a system with limited resources. Experimental results are presented for an identity document recognition system, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, which leads to blurred frames.
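
    One cheap no-reference gate of the kind such a module could apply (a stand-in, not the paper's estimator) scores focus by the variance of the Laplacian and, via the feedback path, asks the stream for more frames when nothing passes:

        import cv2

        def sharpness(frame_gray):
            """Variance of the Laplacian: a standard no-reference focus/blur score."""
            return cv2.Laplacian(frame_gray, cv2.CV_64F).var()

        def best_frame(frames, min_score=100.0):
            """Pick the sharpest frame; None signals the feedback loop to keep sampling."""
            score, frame = max(((sharpness(f), f) for f in frames), key=lambda t: t[0])
            return frame if score >= min_score else None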

  18. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, in which an object is in partial or full view in one camera while fully visible in another. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Detection accuracy depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes of at least two nearby cameras overlap. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  19. Quality of experience enhancement of high efficiency video coding video streaming in wireless packet networks using multiple description coding

    NASA Astrophysics Data System (ADS)

    Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled

    2018-01-01

    Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large amounts of data. We propose a quality of experience (QoE) model relying on an HEVC encoder adaptation scheme based on multiple description coding (MDC) for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, varying quantization parameter (QP) values across different video contents to deduce their influence on the transmitted sequence; (2) an investigation of QoE support for multimedia applications in wireless networks, inspecting the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled jointly with an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score (MOS).
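
    The general MDC principle behind the proposal can be sketched with the simplest temporal splitter (the paper builds its descriptions inside the HEVC encoder; this odd/even split is only illustrative): losing one description halves temporal resolution instead of stalling the stream.

        def mdc_split(frames):
            """Simplest temporal MDC: odd/even frame subsampling into two descriptions."""
            return frames[0::2], frames[1::2]

        def mdc_reconstruct(d0, d1):
            """Interleave if both descriptions arrive; if one is lost, repeat the
            survivor's frames (graceful degradation rather than a freeze)."""
            if d0 and d1:
                out = []
                for a, b in zip(d0, d1):
                    out += [a, b]
                return out
            return [f for f in (d0 or d1) for _ in (0, 1)]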

  20. Nonchronological video synopsis and indexing.

    PubMed

    Pritch, Yael; Rav-Acha, Alex; Peleg, Shmuel

    2008-11-01

    The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time-consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by showing multiple activities simultaneously, even when they originally occurred at different times. The synopsis video is also an index into the original video, pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, as generated by webcams and surveillance cameras. It can address queries like "Show in one minute the synopsis of this camera's broadcast during the past day". This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.

  1. Online Discussion Forums with Embedded Streamed Videos on Distance Courses

    ERIC Educational Resources Information Center

    Fernandez, Vicenc; Simo, Pep; Castillo, David; Sallan, Jose M.

    2014-01-01

    Existing literature on education and technology has frequently highlighted the usefulness of online discussion forums for distance courses; however, the majority of such investigations have focused their attention only on text-based forums. The objective of this paper is to determine if the embedding of streamed videos in online discussion forums…

  2. Audio-video feature correlation: faces and speech

    NASA Astrophysics Data System (ADS)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and the presence of the corresponding voice in the audio stream was studied. A third stream, the script of the movie, is warped onto the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  3. A streaming-based solution for remote visualization of 3D graphics on mobile devices.

    PubMed

    Lamberti, Fabrizio; Sanna, Andrea

    2007-01-01

    Mobile devices such as Personal Digital Assistants, Tablet PCs, and cellular phones have greatly enhanced users' capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a hard task to accomplish without specialized hardware. This paper proposes a system in which a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions, based on MPEG video streaming, involving complex 3D models. The proposed framework allows mobile devices such as smart phones, Personal Digital Assistants (PDAs), and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on the hardware resources at the server side and the multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth the issues related to latency, bit rate and quality of the generated stream, screen resolutions, and frames per second displayed.

  4. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding the behavior of objects in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from the moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach of the algorithm is to identify all possible coherent motion regions and then extract a subset of motion regions, based on an innovative measure, to automatically locate moving objects in crowded environments. The software reports object snapshots, counts, and derived statistics (counts over time) from input video streams, and can process video streamed over the Internet or directly from a hardware device (camera).

  5. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, the thermal image was at first presented to the observer in the eyepiece only. In the early 1990s, TV cameras were attached, and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged for so long are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of thermal imagers in comparison to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for thermal imagers.

  6. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) by the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Analog video interfaces degrade when induced system noise is present. Further signal degradation has traditionally been associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring of, and technical involvement in, video standards groups provides the knowledge base necessary for avionics systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (formerly CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog-based broadcasting facilities to fully digital computerized systems. An opportunity exists to implement these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Fewer signal conversion processing steps, major improvements in video noise reduction, and an added capability to pass audio and embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology and definitions, and comparing and contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems-engineering approach to video interface standardization can result in extendible and affordable cockpit management systems.

  7. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit, and display extra-high-quality movies at 8-megapixel resolution using the JPEG2000 coding algorithm. The image quality is four times better than HDTV in resolution, enabling conventional films to be replaced with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder can perform real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies of 24 frames per second.

  8. Annotation of UAV surveillance video

    NASA Astrophysics Data System (ADS)

    Howlett, Todd; Robertson, Mark A.; Manthey, Dan; Krol, John

    2004-08-01

    Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable to UAV video is described; object tracking is necessary for attaching annotations to the objects of interest. A methodology and format are defined for encoding video annotations using the SMPTE Key-Length-Value (KLV) encoding standard. This provides the following benefits: a non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation-enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense's (DoD's) need for a video annotation capability. Current practice for creating annotated products is to capture a still image frame, annotate it using an Electric Light Table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing: it is too slow, and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
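
    Packing an annotation as a SMPTE KLV triplet is mechanical: a 16-byte key, a BER-encoded length, then the value. A minimal sketch (a real key would come from a registered SMPTE dictionary; the one below is a placeholder):

        def klv_pack(key16: bytes, value: bytes) -> bytes:
            """One Key-Length-Value triplet: 16-byte key + BER length + payload."""
            assert len(key16) == 16
            n = len(value)
            if n < 128:                      # BER short form: one length byte
                length = bytes([n])
            else:                            # BER long form: 0x80 | byte count, then length
                body = n.to_bytes((n.bit_length() + 7) // 8, "big")
                length = bytes([0x80 | len(body)]) + body
            return key16 + length + value

        packet = klv_pack(bytes(16), b'{"label": "vehicle", "x": 312, "y": 88}')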

  9. Using Text Mining to Uncover Students' Technology-Related Problems in Live Video Streaming

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2011-01-01

    Because of their capacity to sift through large amounts of data, text mining and data mining are enabling higher education institutions to reveal valuable patterns in students' learning behaviours without having to resort to traditional survey methods. In an effort to uncover live video streaming (LVS) students' technology related-problems and to…

  10. Constructing a Streaming Video-Based Learning Forum for Collaborative Learning

    ERIC Educational Resources Information Center

    Chang, Chih-Kai

    2004-01-01

    As web-based courses using videos have become popular in recent years, the issue of managing audio-visual aids has become pertinent. Generally, the contents of audio-visual aids may include a lecture, an interview, a report, or an experiment, which may be transformed into a streaming format capable of making the quality of Internet-based videos…

  11. An Evaluation of Streaming Digital Video Resources in On- and Off-Campus Engineering Management Education

    ERIC Educational Resources Information Center

    Palmer, Stuart

    2007-01-01

    A recent television documentary on the Columbia space shuttle disaster was converted to streaming digital video format for educational use by on- and off-campus students in an engineering management study unit examining issues in professional engineering ethics. An evaluation was conducted to assess the effectiveness of this new resource. Use of…

  12. A highly sensitive underwater video system for use in turbid aquaculture ponds

    NASA Astrophysics Data System (ADS)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-08-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds

    PubMed Central

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.

    2016-01-01

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic-matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provisioning feed with minimal waste, to determining whether the accumulation of organic-matter residues dictates an exchange of pond water, and to management decisions concerning shrimp health. PMID:27554201

  14. Game theoretic wireless resource allocation for H.264 MGS video transmission over cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2015-03-01

    We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives its minimum video quality requirement, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
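
    As a rough, hedged illustration of the optimization pipeline described above (not the paper's actual formulation), the sketch below maximizes the log of the Nash product over a three-user bandwidth allocation with a bare-bones Particle Swarm Optimization loop; the logarithmic rate-quality curve, the per-user minima, and the PSO constants are all assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
B_TOTAL = 10.0                      # total bandwidth (illustrative units)
Q_MIN = np.array([1.0, 1.5, 0.8])   # per-user minimum quality (disagreement point)

def quality(b):
    """Assumed concave rate-quality curve; a stand-in for an actual
    H.264 MGS rate-distortion model."""
    return 3.0 * np.log1p(b)

def nash_objective(alloc):
    q = quality(alloc)
    if np.any(q <= Q_MIN) or alloc.sum() > B_TOTAL:
        return -np.inf                    # infeasible allocation
    return np.sum(np.log(q - Q_MIN))      # log of the Nash product

# Minimal PSO over the 3-dimensional allocation vector.
n_particles, n_iters = 30, 200
x = rng.uniform(0, B_TOTAL / 3, (n_particles, 3))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([nash_objective(p) for p in x])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 3))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, B_TOTAL)
    vals = np.array([nash_objective(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("allocation:", gbest.round(2), "log Nash product:", round(nash_objective(gbest), 3))
```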

  15. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article describes the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based on stereoscopic video content analysis, and its core functionality is to serve as a versatile tool for effective 3DTV service quality assessment. As an objective quality metric, it may be used as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines over a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  16. Dashboard Videos

    ERIC Educational Resources Information Center

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I received an emailed web link entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, leads to video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  17. The QoE implications of ultra-high definition video adaptation strategies

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Awobuluyi, Olatunde; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    As the capabilities of high-end consumer devices increase, streaming and playback of Ultra-High Definition (UHD) video is set to become commonplace. The move to these new, higher-resolution video services is one of the main factors contributing to the predicted continuation of growth in video-related traffic on the Internet. This massive increase in bandwidth requirements, even when mitigated by the use of new video compression standards such as H.265, will place an ever-increasing burden on network service providers. This will be especially true in mobile environments, where users have come to expect ubiquitous access to content. Consequently, delivering UHD and Full UHD (FUHD) video content is one of the key drivers for future Fifth Generation (5G) mobile networks. One often-voiced but as yet unanswered question is whether users of mobile devices with modest screen sizes (e.g. smartphones or smaller tablets) will actually benefit, in terms of an improved user experience, from consuming the much higher bandwidth required to watch online UHD video. In this paper, we use scalable H.265 encoded video streams to conduct a subjective evaluation of the impact on a user's perception of video quality across a comprehensive range of adaptation strategies, covering each of the three adaptation domains, for UHD and FUHD video. The results of our subjective study provide insightful and useful indications of which methods of adapting UHD and FUHD streams have the least impact on users' perceived QoE. In particular, it was observed that, in over 70% of cases, users were unable to distinguish between full HD (1080p) and UHD (4K) videos when they were unaware of which version was being shown to them. Our results can be used to derive adaptation rule sets that facilitate fast, QoE-aware in-network adaptation of video streams in support of real-time adaptation objectives. Undoubtedly they will also promote discussion around how network service providers manage their relationships with end users and how service level agreements might be shaped to account for what may be viewed as `unproductive' use of bandwidth to deliver very marginal or imperceptible improvements in viewing experience.

  18. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provision of continuous media (CM) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot-page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
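
    The quoted dispatch order can be realized with an ordinary binary heap. The sketch below is an assumed, minimal rendering of that priority scheme (the class names and request strings are invented); the FIFO tie-breaker preserves arrival order within a priority level.

```python
import heapq
import itertools

# Priority order quoted in the abstract, highest first.
PRIORITY = {
    "admin_rw": 0, "hot_multicast": 1, "cm_read": 2,
    "web_read": 3, "cm_write": 4, "web_write": 5,
}
_tie = itertools.count()   # FIFO ordering among equal priorities

queue = []

def submit(kind: str, request: str):
    heapq.heappush(queue, (PRIORITY[kind], next(_tie), request))

def next_request():
    return heapq.heappop(queue)[2] if queue else None

submit("web_read", "GET /index.html")
submit("admin_rw", "PUT /config")
submit("cm_read", "STREAM /movies/hot.mp4")
while (req := next_request()) is not None:
    print(req)   # served in priority order: PUT, STREAM, GET
```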

  19. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical, life-or-death matter. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
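
    A toy rendering of the trigger-based reference-frame idea: frames where the detector fires become I-frames, and a later vehicle search touches only those. The detector itself and the actual codec integration are out of scope here and assumed away.

```python
def choose_frame_types(detections):
    """Mark a frame as an I-frame when a vehicle is detected at the
    trigger position; all other frames become P-frames.
    `detections` is an iterable of booleans, one per frame."""
    return ["I" if hit else "P" for hit in detections]

def search_candidates(frame_types):
    """A search for a specific vehicle only needs to decode the
    reference (I) frames, not the whole compressed sequence."""
    return [i for i, t in enumerate(frame_types) if t == "I"]

types = choose_frame_types([False, True, False, False, True, False])
print(types)                     # ['P', 'I', 'P', 'P', 'I', 'P']
print(search_candidates(types))  # [1, 4]
```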

  20. Stochastic Packet Loss Model to Evaluate QoE Impairments

    NASA Astrophysics Data System (ADS)

    Hohlfeld, Oliver

    With the provisioning of broadband access for the mass market—even in wireless and mobile networks—multimedia content, especially real-time streaming of high-quality audio and video, is extensively viewed and exchanged over the Internet. Quality of Experience (QoE), describing the service quality perceived by the user, is a vital factor in ensuring customer satisfaction in today's communication networks. Frameworks for assessing quality degradations in streamed video are currently being investigated as a complex multi-layered research topic, involving network traffic load, codec functions and measures of user perception of video quality.

  1. Interactive real-time media streaming with reliable communication

    NASA Astrophysics Data System (ADS)

    Pan, Xunyu; Free, Kevin M.

    2014-02-01

    Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software pɛvμa (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, pɛvμa utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alterations of playback, such as pausing or tracking to a different spot in the media file, are reflected in all media streams. These techniques are designed to allow users at different locations to simultaneously view a full-length HD video and interactively control the media streaming session. To create a sustainable media stream of high quality, our system supports UDP packet-loss recovery at high transmission speeds using custom File-Buffers. Traditional real-time streaming protocols such as the Real-time Transport Protocol/RTP Control Protocol (RTP/RTCP) provide no such error-recovery mechanism. Finally, the system also features an Instant Messenger that allows users to interact socially with one another while they enjoy a media file. The ultimate goal of the application is to offer users a hassle-free way to watch a media file over long distances without having to upload any personal information to a third-party database. Moreover, users can communicate with each other and stream media directly from one mobile device to another while remaining independent of the traditional sign-up required by most streaming services.
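
    The UDP loss-recovery idea lends itself to a small sketch. Since the custom File-Buffer design is not spelled out in the abstract, the class below is an assumed NACK-driven retransmission buffer, not the application's actual code.

```python
from collections import OrderedDict

class RetransmitBuffer:
    """Sender-side sketch: keep recently sent UDP payloads keyed by
    sequence number so that lost packets can be resent when the
    receiver reports a gap (a NACK)."""
    def __init__(self, capacity: int = 2048):
        self.capacity = capacity
        self.sent = OrderedDict()

    def record(self, seq: int, payload: bytes):
        self.sent[seq] = payload
        while len(self.sent) > self.capacity:   # evict oldest packets
            self.sent.popitem(last=False)

    def on_nack(self, seq: int):
        """Return the payload to resend, or None if it already expired."""
        return self.sent.get(seq)

buf = RetransmitBuffer()
for seq in range(5):
    buf.record(seq, f"chunk-{seq}".encode())
print(buf.on_nack(3))   # b'chunk-3' -> resend over UDP
```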

  2. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), and is intended to improve decoding performance to satisfy real-time requirements while maintaining reasonable area and power consumption. Several techniques, such as slice-level pipelining, MB (Macro-Block) level pipelining and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams at a 200 MHz working frequency.
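
    The quoted cycle counts translate directly into an ideal frame-rate bound. The arithmetic below assumes the stated 200 MHz clock, 16 × 16 macroblocks, and zero pipeline or memory overhead, which is why it lands above the reported 30/41/39 fps figures.

```python
# Rough throughput check for the quoted per-MB cycle counts.
CLOCK_HZ = 200e6
MB_PER_FRAME = (1920 // 16) * (1088 // 16)   # 120 x 68 = 8160 macroblocks

for codec, cycles_per_mb in [("H.264", 512), ("AVS", 435), ("MPEG2", 438)]:
    fps = CLOCK_HZ / (cycles_per_mb * MB_PER_FRAME)
    print(f"{codec}: {fps:.1f} fps upper bound")
# The reported 30/41/39 fps figures sit below these ideal bounds,
# consistent with real-world pipeline and memory overheads.
```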

  3. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... transmission by a video programming distributor. (8) Children's Programming. Television programming directed at children 16 years of age and under. (b) The following video programming distributors must provide... or on children's programming, on each programming stream on which they carry one of the top four...

  4. Fragility issues of medical video streaming over 802.11e-WLAN m-health environments.

    PubMed

    Tan, Yow-Yiong Edwin; Philip, Nada; Istepanian, Robert H

    2006-01-01

    This paper presents some of the fragility issues of medical video streaming over 802.11e WLAN in m-health applications. In particular, we present a medical channel-adaptive fair allocation (MCAFA) scheme for enhanced QoS support in IEEE 802.11 (WLAN), proposed as a modification of the standard 802.11e enhanced distributed coordination function (EDCF) for improved medical data performance. The proposed MCAFA extends the EDCF by halving the contention window (CW) after zeta consecutive successful transmissions, to reduce the collision probability when the channel is busy. Simulation results show that MCAFA outperforms EDCF in terms of overall performance relevant to the requirements of high-throughput medical data and video streaming traffic in 3G/WLAN wireless environments.
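
    The core MCAFA rule quoted above (halve the contention window after zeta consecutive successes) fits in a few lines; the CW bounds, the zeta value, and the collision backoff shown here are illustrative stand-ins for the real 802.11e parameters.

```python
CW_MIN, CW_MAX, ZETA = 15, 1023, 3   # illustrative 802.11e-style values

class ContentionWindow:
    """Sketch of the MCAFA rule described in the abstract: halve the
    contention window after ZETA consecutive successful transmissions;
    back off exponentially on collision, as in standard EDCF."""
    def __init__(self):
        self.cw, self.successes = CW_MAX // 4, 0

    def on_success(self):
        self.successes += 1
        if self.successes >= ZETA:
            self.cw = max(CW_MIN, self.cw // 2)
            self.successes = 0

    def on_collision(self):
        self.successes = 0
        self.cw = min(CW_MAX, 2 * self.cw + 1)

w = ContentionWindow()
for event in ["s", "s", "s", "c", "s"]:
    w.on_success() if event == "s" else w.on_collision()
    print(event, "-> CW =", w.cw)
```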

  5. A time-varying subjective quality model for mobile streaming videos with stalling events

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
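
    To make the hysteresis/recency idea concrete, the toy trace below applies an immediate quality drop during each stall and an exponential recovery afterwards. The functional form and every constant are invented for illustration; they are not the paper's fitted model.

```python
import math

def qoe_over_time(duration_s, stalls, base=90.0, drop=25.0, tau=15.0):
    """Toy time-varying QoE trace: each stall causes an immediate drop
    followed by exponential recovery with time constant `tau`, capturing
    the recency/hysteresis effect that recent dissatisfaction depresses
    current QoE. `stalls` is a list of (start_time_s, length_s) tuples."""
    trace = []
    for t in range(duration_s):
        q = base
        for start, length in stalls:
            if start <= t < start + length:
                q -= drop                                          # mid-stall penalty
            elif t >= start + length:
                q -= drop * math.exp(-(t - start - length) / tau)  # slow recovery
        trace.append(max(q, 0.0))
    return trace

trace = qoe_over_time(60, stalls=[(10, 4), (30, 2)])
print([round(q) for q in trace[::10]])
```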

  6. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video-derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video-derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  7. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  8. Learner Outcomes and Satisfaction: A Comparison of Live Video-Streamed Instruction, Satellite Broadcast Instruction, and Face-to-Face Instruction

    ERIC Educational Resources Information Center

    Abdous, M'hammed; Yoshimura, Miki

    2010-01-01

    This study examined the final grade and satisfaction level differences among students taking specific courses using three different methods: face-to-face in class, via satellite broadcasting at remote sites, and via live video-streaming at home or at work. In each case, the same course was taught by the same instructor in all three delivery…

  9. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human vision system is very complex and has been studied for many years, specifically for the purpose of efficient encoding of visual content, e.g. video from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the calculated saliency map, yielding a Weighted MSE (WMSE). Our method was validated through subjective quality experiments.
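
    The weighting step itself is compact. The numpy sketch below normalizes a saliency map and uses it to weight the per-pixel squared error; the saliency map here is a synthetic box, since saliency estimation is a separate stage of the method.

```python
import numpy as np

def weighted_mse(ref, dist, saliency):
    """WMSE sketch: weight each pixel's squared error by the saliency
    map, normalised by the total saliency mass so the score remains
    comparable to plain MSE."""
    w = saliency / saliency.sum()
    diff = ref.astype(np.float64) - dist.astype(np.float64)
    return float(np.sum(w * diff ** 2))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64))
dist = np.clip(ref + rng.normal(0, 8, ref.shape), 0, 255)
saliency = np.ones(ref.shape)
saliency[16:48, 16:48] = 5.0    # pretend the centre attracts attention
print(weighted_mse(ref, dist, saliency))
```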

  10. Data compression/error correction digital test system. Appendix 2: Theory of operation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An overall block diagram of the DC/EC digital test system is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard-format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.

  11. Flexible server architecture for resource-optimal presentation of Internet multimedia streams to the client

    NASA Astrophysics Data System (ADS)

    Boenisch, Holger; Froitzheim, Konrad

    1999-12-01

    The transfer of live media streams such as video and audio over the Internet is subject to several problems, static and dynamic in nature. Important quality of service (QoS) parameters not only differ between receivers depending on their network access, service provider, and nationality; the QoS is also variable in time. Moreover, the installed receiver base is heterogeneous with respect to operating system, browser or client software, and browser version. We present a new concept for serving live media streams. It is no longer based on the current one-size-fits-all paradigm, where the server offers just one stream. Our compresslet system takes the opposite approach: it builds media streams `to order' and `just in time'. Every client subscribing to a media stream uses a servlet loaded into the media server to generate a data stream tailored to its resources and constraints. The server is designed such that commonly used components of media streams are computed once. The compresslets use these prefabricated components, code additional data if necessary, and construct the data stream based on the dynamically available QoS and other client constraints. A client-specific encoding leads to a resource-optimal presentation that is especially useful for presenting complex multimedia documents on a variety of output devices.

  12. Dynamic quality of service model for improving performance of multimedia real-time transmission in industrial networks.

    PubMed

    Gopalakrishnan, Ravichandran C; Karunakaran, Manivannan

    2014-01-01

    Nowadays, quality of service (QoS) is very popular in various research areas like distributed systems, multimedia real-time applications and networking. The requirements of these systems are to satisfy reliability, uptime, security constraints and throughput, as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without creating any intervention over control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer the compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE provides automation to calculate the compression level and change the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic quality-of-service (QoS) management. The proposed framework deals with dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. This paper also presents distinct QoS metrics based both on image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution we have designed and implemented a simulator based on Matlab/Simulink, a tool to evaluate different network architectures using Simulink blocks.

  13. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  14. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  15. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
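
    The pixelation stage described above amounts to block-averaging the camera feed down to the electrode-array grid. The sketch below shows only that step, with an invented grid size; the enhancement modules (edge emphasis, contrast handling) are separate stages and omitted here.

```python
import numpy as np

def pixelate(frame, grid=(6, 10)):
    """Downsample a grayscale camera frame to the electrode-array
    resolution by block averaging. The grid size is illustrative;
    real implant arrays differ. Frame sides must divide evenly."""
    h, w = frame.shape
    gh, gw = grid
    blocks = frame.reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3))

frame = np.tile(np.linspace(0, 255, 120), (60, 1))   # synthetic 60x120 feed
print(pixelate(frame).round(0))                      # 6x10 "electrode" image
```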

  16. Development of a Video Coding Scheme for Analyzing the Usability and Usefulness of Health Information Systems.

    PubMed

    Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Usability has been identified as a key issue in health informatics. Worldwide, numerous projects have been carried out in an attempt to increase and optimize health system usability. Usability testing, involving observing end users interacting with systems, has been widely applied, and numerous publications have appeared describing such studies. However, to date, fewer works have been published describing methodological approaches to analyzing the rich data stream that results from usability testing, including analysis of video, audio and screen recordings. In this paper we describe our work in the development and application of a coding scheme for analyzing the usability of health information systems. The phases involved in such analyses are described.

  17. Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2013-02-01

    The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks, where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next-generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) for the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitations of a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one-NAL-unit-per-RTP-packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header. Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and clean random access (CRA) pictures were given the highest priority, followed by NAL units containing pictures used as reference pictures from which others can be predicted. The second method assigned a priority to each NAL unit based on the rate-distortion cost of the VCL coding units it contains; the sum of the rate-distortion costs of the coding units in a NAL unit was used as the priority weighting. The preliminary results of extensive experiments have shown that both schemes offered an improvement in PSNR, when comparing original and decoded received streams, over uncontrolled packet loss. The first method consistently delivered a significant average improvement of 0.97 dB over the uncontrolled scenario, while the second method provided a measurable, but less consistent, improvement across the range of testing conditions and encoder configurations.
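
    The first prioritisation method reduces to inspecting the HEVC NAL unit header. The sketch below uses the standard nal_unit_type values for parameter sets (32-34) and IDR/CRA pictures (19-21); treating the odd-numbered sub-layer-reference picture types as the middle class is an assumption about how the paper identified "reference pictures".

```python
def nal_unit_type(nal: bytes) -> int:
    """The HEVC NAL unit header is two bytes; the 6-bit type occupies
    bits 1..6 of the first byte."""
    return (nal[0] >> 1) & 0x3F

def rtp_priority(nal: bytes) -> int:
    """Parameter sets and IDR/CRA pictures get top priority, other
    referenced pictures come next, everything else is lowest
    (lower number = higher priority)."""
    t = nal_unit_type(nal)
    if t in (32, 33, 34):        # VPS, SPS, PPS parameter sets
        return 0
    if t in (19, 20, 21):        # IDR_W_RADL, IDR_N_LP, CRA pictures
        return 0
    if t in (1, 3, 5, 7, 9):     # TRAIL_R, TSA_R, STSA_R, RADL_R, RASL_R
        return 1                 # (sub-layer reference pictures)
    return 2

sps = bytes([33 << 1, 1])        # synthetic two-byte NAL header
print(nal_unit_type(sps), rtp_priority(sps))   # 33 0
```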

  18. Stereoscopic augmented reality for laparoscopic surgery.

    PubMed

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.

  19. DAM-ing the Digital Flood

    ERIC Educational Resources Information Center

    Raths, David

    2008-01-01

    With the widespread digitization of art, photography, and music, plus the introduction of streaming video, many colleges and universities are realizing that they must develop or purchase systems to preserve their school's digitized objects; that they must create searchable databases so that researchers can find and share copies of digital files;…

  20. Bringing the Field to the Supervisor: Innovation in Distance Supervision for Field-Based Experiences Using Mobile Technologies

    ERIC Educational Resources Information Center

    Schmidt, Matthew; Gage, Ashley MacSuga; Gage, Nicholas; Cox, Penny; McLeskey, James

    2015-01-01

    This paper provides a summary of the design, development, and evaluation of a mobile distance supervision system for teacher interns in their field-based teaching experiences. Developed as part of the University of Florida's Restructuring and Improving Teacher Education 325T grant project, the prototype system streams video of teachers in rural…

  1. Using Learning Styles and Viewing Styles in Streaming Video

    ERIC Educational Resources Information Center

    de Boer, Jelle; Kommers, Piet A. M.; de Brock, Bert

    2011-01-01

    Improving the effectiveness of learning when students observe video lectures becomes urgent with the rising advent of (web-based) video materials. Vital questions are how students differ in their learning preferences and what patterns in viewing video can be detected in log files. Our experiments inventory students' viewing patterns while watching…

  2. Tile prediction schemes for wide area motion imagery maps in GIS

    NASA Astrophysics Data System (ADS)

    Michael, Chris J.; Lin, Bruce Y.

    2017-11-01

    Wide-area surveillance, traffic monitoring, and emergency management are just a few of the many applications that benefit from incorporating Wide-Area Motion Imagery (WAMI) maps into geographic information systems. Though the use of motion imagery as a GIS base map via the Web Map Service (WMS) standard is not a new concept, effectively streaming such imagery is particularly challenging due to its large scale and the multidimensionally interactive nature of clients that use WMS. Ineffective streaming from a server to one or more clients can unnecessarily overwhelm network bandwidth and cause frustratingly large amounts of latency in visualization to the user. Seamlessly streaming WAMI through GIS requires good prediction to accurately guess the tiles of the video that will be traversed in the near future. In this study, we build an experimental framework for such prediction schemes around a stochastic interaction model that represents a human user's interaction with a GIS video map. We then propose several algorithms by which the tiles of the stream may be predicted. Results collected both within the experimental framework and using human analyst trajectories show that, though each algorithm thrives under certain constraints, the novel Markovian algorithm yields the best results overall. Furthermore, we argue that the proposed experimental framework is sufficient for the study of these prediction schemes.
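
    A first-order Markov predictor over tile transitions, in the spirit of (though not necessarily identical to) the paper's Markovian algorithm, fits in a few lines:

```python
from collections import Counter, defaultdict

class MarkovTilePredictor:
    """First-order Markov sketch: count observed transitions between
    tile coordinates and prefetch the k most likely successors of the
    tile currently being viewed."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, tile):
        if self.prev is not None:
            self.transitions[self.prev][tile] += 1
        self.prev = tile

    def predict(self, tile, k=3):
        return [t for t, _ in self.transitions[tile].most_common(k)]

p = MarkovTilePredictor()
for tile in [(0, 0), (0, 1), (0, 2), (0, 1), (0, 2), (1, 2)]:
    p.observe(tile)
print(p.predict((0, 1)))   # [(0, 2)] -- the tile to prefetch next
```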

  3. Improving P2P live-content delivery using SVC

    NASA Astrophysics Data System (ADS)

    Schierl, T.; Sánchez, Y.; Hellge, C.; Wiegand, T.

    2010-07-01

    P2P content delivery techniques for video transmission have attracted great interest in recent years. By involving clients in the delivery process, P2P approaches can significantly reduce the load and cost on servers, especially for popular services. However, previous studies have pointed out the unreliability of P2P-based live streaming approaches due to peer churn, where peers may ungracefully leave the P2P infrastructure, typically an overlay network. Peers ungracefully leaving the system cause connection losses in the overlay, which require repair operations. During such repair operations, which typically take a few round-trip times, no data is received over the lost connection. Taking low delay for fast channel tune-in into account as a key feature of broadcast-like streaming applications, a P2P live streaming approach can rely only on a limited media pre-buffer during such repair operations. In this paper, multi-tree based Application Layer Multicast is considered as a P2P overlay technique for live streaming. The use of Flow Forwarding (FF), a.k.a. Retransmission, or Forward Error Correction (FEC) in combination with Scalable Video Coding (SVC) for concealment during overlay repair operations is shown. Furthermore, the benefits of using SVC over AVC single-layer transmission are presented.

  4. Supporting Seamless Mobility for P2P Live Streaming

    PubMed Central

    Kim, Eunsam; Kim, Sangjin; Lee, Choonhwa

    2014-01-01

    With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Through simulation experiments, we show that a P2P live streaming system with our proposed handover scheme improves playback continuity significantly compared to one without it. PMID:24977171

  5. Summarizing Audiovisual Contents of a Video Program

    NASA Astrophysics Data System (ADS)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich content in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that (1) provides a natural visual and audio content overview and (2) maximizes the coverage of both the audio and visual contents of the original video without having to sacrifice either of them.
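
    The partial-alignment step is an assignment problem on a bipartite graph. As a hedged stand-in for the paper's own bipartite-graph algorithm, the sketch below solves a synthetic sentence-to-segment assignment with the Hungarian method from SciPy; the score matrix is invented, whereas the paper's would encode speaker-face correspondence and temporal constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Synthetic alignment scores: rows are spoken sentences in the audio
# summary, columns are image segments in the visual summary.
score = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.3],
    [0.0, 0.4, 0.7],
])
rows, cols = linear_sum_assignment(-score)   # negate to maximise total score
for s, g in zip(rows, cols):
    print(f"sentence {s} -> image segment {g} (score {score[s, g]})")
```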

  6. Atomization of metal (Materials Preparation Center)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-01-01

    Atomization of metal requires high pressure gas and specialized chambers for cooling and collecting the powders without contamination. The critical step for morphological control is the impingement of the gas on the melt stream. The video is a color video of a liquid metal stream being atomized by high pressure gas. This material was cast at the Ames Laboratory's Materials Preparation Center http://www.mpc.ameslab.gov WARNING - AUDIO IS LOUD.

  7. Privacy-protecting video surveillance

    NASA Astrophysics Data System (ADS)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime-control policy, crisis management, social control theory and community consciousness. In turn, video surveillance has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing, but this typically comes at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-Frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used to achieve real-time tracking of users in pervasive spaces while utilizing the additional data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.

  8. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  9. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  10. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.

  11. Video capture of clinical care to enhance patient safety

    PubMed Central

    Weinger, M; Gonzales, D; Slagle, J; Syeed, M

    2004-01-01

    

 Experience from other domains suggests that videotaping and analyzing actual clinical care can provide valuable insights for enhancing patient safety through improvements in the process of care. Methods are described for the videotaping and analysis of clinical care using a high quality portable multi-angle digital video system that enables simultaneous capture of vital signs and time code synchronization of all data streams. An observer can conduct clinician performance assessment (such as workload measurements or behavioral task analysis) either in real time (during videotaping) or while viewing previously recorded videotapes. Supplemental data are synchronized with the video record and stored electronically in a hierarchical database. The video records are transferred to DVD, resulting in a small, cheap, and accessible archive. A number of technical and logistical issues are discussed, including consent of patients and clinicians, maintaining subject privacy and confidentiality, and data security. Using anesthesiology as a test environment, over 270 clinical cases (872 hours) have been successfully videotaped and processed using the system. PMID:15069222

  12. Capabilities Assessment and Employment Recommendations for Full Motion Video Optical Navigation Exploitation (FMV-ONE)

    DTIC Science & Technology

    2015-06-01

    [Fragment of the report's acronym list: GEOINT (geospatial intelligence); GFC (ground force commander); GPS (global positioning system); GUI (graphical user interface); HA/DR (humanitarian…); … transport stream; UAS (unmanned aerial system; see UAV); UAV (unmanned aerial vehicle; see UAS); VM (virtual machine); VMU (Marine Unmanned Aerial Vehicle…).] … Unmanned Air Systems (UASs). Current programs promise to dramatically increase the number of FMV feeds in the near future. However, there are too…

  13. Streaming Media Seminar--Effective Development and Distribution of Streaming Multimedia in Education

    ERIC Educational Resources Information Center

    Mainhart, Robert; Gerraughty, James; Anderson, Kristine M.

    2004-01-01

    Concisely defined, "streaming media" is moving video and/or audio transmitted over the Internet for immediate viewing/listening by an end user. However, at Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA), streaming media is approached from a broader perspective. The working definition includes…

  14. High speed data compactor

    DOEpatents

    Baumbaugh, Alan E.; Knickerbocker, Kelly L.

    1988-06-04

    A method and apparatus for suppressing from transmission non-informational data words from a source of data words such as a video camera. Data words having values greater than a predetermined threshold are transmitted, whereas data words having values less than the threshold are not transmitted; their occurrences are instead counted. Before transmission, the counts of invalid data words and the valid data words are appended with flag digits, which a receiving system decodes. The original data stream is fully reconstructable from the stream of valid data words and the counts of invalid data words.
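
    The patented scheme is essentially thresholding plus run-length counting of the suppressed words. The sketch below is a software rendering of that idea with invented flag conventions; sub-threshold words are reconstructed only as positions (filled with a constant), which is all the count preserves.

```python
THRESHOLD = 16           # illustrative value threshold
VALID, COUNT = 1, 0      # one-bit flag prepended to each output word

def compact(words):
    """Emit valid words (>= THRESHOLD) tagged VALID; replace each run
    of invalid words with a single COUNT-tagged word holding the run length."""
    out, run = [], 0
    for w in words:
        if w >= THRESHOLD:
            if run:
                out.append((COUNT, run))
                run = 0
            out.append((VALID, w))
        else:
            run += 1
    if run:
        out.append((COUNT, run))
    return out

def expand(stream, fill=0):
    """Reconstruct the word stream; suppressed words come back as `fill`."""
    out = []
    for flag, v in stream:
        out.extend([v] if flag == VALID else [fill] * v)
    return out

data = [0, 3, 200, 180, 1, 0, 0, 99]
packed = compact(data)
print(packed)           # [(0, 2), (1, 200), (1, 180), (0, 3), (1, 99)]
print(expand(packed))   # [0, 0, 200, 180, 0, 0, 0, 99]
```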

  15. Use of streamed internet video for cytology training and education: www.PathLab.org.

    PubMed

    Poller, David; Ljung, Britt-Marie; Gonda, Peter

    2009-05-01

    An Internet-based method is described for submitting video clips to a website editor to be reviewed, edited, and then uploaded onto a video server, with a hypertext link to a website. The information on the webpages is searchable via the website sitemap on Internet search engines. A survey of users who accessed a single 59-minute FNA cytology training video via the website showed a mean usefulness score for specialists/consultants of 3.75 (range 1-5, n = 16); the mean usefulness score for trainees was 4.4 (range 3-5, n = 12), and the mean score for visual and sound quality was 3.9 (range 2-5, n = 16). Fifteen of 17 respondents thought that posting video training material on the Internet was a good idea, and 9 of 17 would also consider submitting training videos to a similar website. This brief exercise has shown that there is value in posting educational or training video content on the Internet and that the use of streamed video accessed via the Internet will be of increasing importance. (c) 2009 Wiley-Liss, Inc.

  16. JXTA: A Technology Facilitating Mobile P2P Health Management System

    PubMed Central

    Rajkumar, Rajasekaran; Nallani Chackravatula Sriman, Narayana Iyengar

    2012-01-01

    Objectives Mobile JXTA (Juxtapose) is gaining momentum and has attracted the interest of doctors and patients through its P2P message-transmission service. Audio and video can also be transmitted through JXTA. The use of a mobile streaming mechanism with the support of mobile hospital management and healthcare systems would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluates P2P JXTA/JXME (JXTA functionality for MIDP devices), which facilitates peer-to-peer applications on resource-constrained mobile devices. A proven learning algorithm was also used to automatically send and process sorted patient data to nurses. Methods From December 2010 to December 2011, a total of 500 patients were referred to our hospital due to minor health problems and were monitored. We selected all of the peer groups and the control server, which controlled the BMO (Block Medical Officer) peer groups; through the doctor peer groups, prescriptions were delivered to patients' mobile phones over the JXTA/JXME network. Results All 500 patients were registered in the JXTA network. Among these, 300 patient histories were referred to the record peer group by the doctors, 100 patients were referred to the external doctor peer group, and 100 patients were registered as new users in the JXTA/JXME network. Conclusion This system was developed for mobile streaming applications and was designed to support the mobile health management system using JXTA/JXME. The simulated results show that this system can carry out streaming audio and video applications. Controlling and monitoring by the doctor peer group makes the system more flexible and structured. Further studies are needed to improve knowledge mining and cloud-based mHealth management technology in comparison with the traditional system. PMID:24159509

  17. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display provides the viewer an image in front of their eyes. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is realizing augmented reality with natural features instead of markers, which addresses the limitations of markers: they are restricted to black and white, are unsuitable for varying environmental conditions, and in particular fail when the marker is partially occluded. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.

  18. Encryption for confidentiality of the network and influence of this to the quality of streaming video through network

    NASA Astrophysics Data System (ADS)

    Sevcik, L.; Uhrin, D.; Frnda, J.; Voznak, M.; Toral-Cruz, Homer; Mikulec, M.; Jakovlev, Sergej

    2015-05-01

    Nowadays, the interest in real-time services, like audio and video, is growing. These services are mostly transmitted over packet networks, which are based on the IP protocol, so analyses of these services and their behavior in such networks are becoming more frequent. Video has become a significant part of all data traffic sent via IP networks. In general, a video service is a one-way service (except, e.g., video calls) and network delay is not as important a factor as in a voice service. The dominant network factors that influence the final video quality are packet loss, delay variation, and the capacity of the transmission links. Analysis of video quality concentrates on the resistance of video codecs to packet loss in the network, which causes artefacts in the video. IPsec provides confidentiality in terms of safety, integrity and non-repudiation (using HMAC-SHA1, 3DES encryption for confidentiality, and AES in CBC mode) with an authentication header and ESP (Encapsulating Security Payload). The paper brings a detailed view of the performance of video streaming over an IP-based network. We compared the quality of video under packet loss and under encryption as well. The measured results demonstrate the relation of the video codec type and bitrate to the final video quality.

  19. Deep Sea Gazing: Making Ship-Based Research Aboard RV Falkor Relevant and Accessible

    NASA Astrophysics Data System (ADS)

    Wiener, C.; Zykov, V.; Miller, A.; Pace, L. J.; Ferrini, V. L.; Friedman, A.

    2016-02-01

    Schmidt Ocean Institute (SOI) is a private, non-profit operating foundation established to advance the understanding of the world's oceans through technological advancement, intelligent observation, and open sharing of information. Our research vessel Falkor provides ship time to selected scientists and supports a wide range of scientific functions, including ROV operations with live streaming capabilities. Since 2013, SOI has live-streamed 55 ROV dives in high definition and recorded them onto YouTube. This totals over 327 hours of video, which received 1,450,461 views in 2014. SOI is one of the only research programs that makes its entire dive series available online, creating a rich collection of video data sets. In doing this, we provide an opportunity for scientists to make new discoveries in the video data that may have been missed earlier. These data sets are also available to students, allowing them to engage with real data in the classroom. SOI's video collection is also being used in a newly developed video management system, Ocean Video Lab. Telepresence-enabled research is an important component of Falkor cruises, which is exemplified by several that were conducted in 2015. This presentation will share a few case studies, including an image-tagging citizen science project conducted through the Squidle interface in partnership with the Australian Centre for Field Robotics. Using real-time image data collected in the Timor Sea, numerous shore-based citizens created seafloor image tags that could be used by machine learning algorithms on Falkor's high-performance computer (HPC) to accomplish habitat characterization. With the HPC system, real-time robot tracking, image tagging, and other outreach connections were made possible, allowing scientists on board to engage with the public and build their knowledge base. The above examples will be used to demonstrate the benefits of remote data analysis and participatory engagement in science-based telepresence.

  20. Eye movements while viewing narrated, captioned, and silent videos

    PubMed Central

    Ross, Nicholas M.; Kowler, Eileen

    2013-01-01

    Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357

  1. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of... provide 50 hours of video description per calendar quarter, either during prime time or on children's... calendar quarter, either during prime time or on children's programming, on each programming stream on...

  2. 47 CFR 79.3 - Video description of video programming.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... programming distributor. (8) Children's Programming. Television programming directed at children 16 years of... provide 50 hours of video description per calendar quarter, either during prime time or on children's... calendar quarter, either during prime time or on children's programming, on each programming stream on...

  3. Research methods of plasma stream interaction with heat-resistant materials

    NASA Astrophysics Data System (ADS)

    Tyuftyaev, A. S.; Gadzhiev, M. Kh; Sargsyan, M. A.; Chinnov, V. F.; Demirov, N. A.; Kavyrshin, D. I.; Ageev, A. G.; Khromov, M. A.

    2016-11-01

    An experimental automated system was designed and constructed for studying the parameters and characteristics of the non-stationary interaction between a high-enthalpy plasma stream and an investigated sample: the enthalpy of the incident plasma stream; the speed and temperature of the plasma stream; the temperature of electrons and heavy particles, their ionic composition and spatial distribution; the heat flux incident on the sample (kW/cm2); the surface temperature of the sample; the ablation of the sample material; and others. Achievable plasma heat flux levels are measured by calorimetry of plasma streams incident on the surface of a multisection copper calorimeter. The acceleration characteristics of the profiled plasma torch nozzle, as well as the gas flow rate, are determined by measuring the total pressure with a Pitot tube. Video visualization of the interacting system is carried out using synchronized high-speed cameras. Micropyrometry of a selected zone on the sample surface is carried out with a high-speed three-wavelength pyrometer. To measure the rate of mass loss of the sample, the laser-knife and two-position stereoscopy methods are used in addition to weighing. Plasma and sample emission measurements are performed with two separate spectrometers.

  4. Cross-Modal Approach for Karaoke Artifacts Correction

    NASA Astrophysics Data System (ADS)

    Yan, Wei-Qi; Kankanhalli, Mohan S.

    In this chapter, we combine adaptive sampling in conjunction with video analogies (VA) to correct the audio stream in the karaoke environment $\kappa = \{\kappa(t) : \kappa(t) = (U(t), K(t)),\ t \in (t_s, t_e)\}$, where $t_s$ and $t_e$ are the start and end times respectively and $U(t)$ is the user multimedia data. We employ multiple streams from the karaoke data $K(t) = (K_V(t), K_M(t), K_S(t))$, where $K_V(t)$, $K_M(t)$ and $K_S(t)$ are the video, musical accompaniment and original singer's rendition respectively, along with the user multimedia data $U(t) = (U_A(t), U_V(t))$, where $U_V(t)$ is the user video captured with a camera and $U_A(t)$ is the user's rendition of the song. We analyze the audio and video streaming features $\Psi(\kappa) = \{\Psi(U(t), K(t))\} = \{\Psi(U(t)), \Psi(K(t))\} = \{\Psi_U(t), \Psi_K(t)\}$ to produce the corrected singing, namely the output $U'(t)$, which is made as close as possible to the original singer's rendition. Note that $\Psi$ represents any kind of feature processing.

  6. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created more situational awareness, which made it easier to track people across multiple video streams. Based on all experiences from the experimental setup and the case, recommendations are formulated for use in practice. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. An innovative experimental setup for Large Scale Particle Image Velocimetry measurements in riverine environments

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Olivieri, Giorgio; Porfiri, Maurizio; Grimaldi, Salvatore

    2014-05-01

    Large Scale Particle Image Velocimetry (LSPIV) is a powerful methodology to nonintrusively monitor surface flows. Its use has been beneficial to the development of rating curves in riverine environments and to the mapping of geomorphic features in natural waterways. Typical LSPIV experimental setups rely on mast-mounted cameras for the acquisition of natural stream reaches. Such cameras are installed on stream banks and angled with respect to the water surface to capture large-scale fields of view. Despite its promise and the simplicity of the setup, the practical implementation of LSPIV is affected by several challenges, including the acquisition of ground reference points for image calibration and time-consuming, highly user-assisted procedures to orthorectify images. In this work, we perform LSPIV studies on stream sections in the Aniene and Tiber basins, Italy. To alleviate the limitations of traditional LSPIV implementations, we propose an improved video acquisition setup comprising a telescopic mast, an inexpensive GoPro Hero 3 video camera, and a system of two lasers. The setup maintains the camera axis perpendicular to the water surface, thus mitigating uncertainties related to image orthorectification. Further, the mast encases a laser system for remote image calibration, allowing videos to be calibrated nonintrusively without acquiring ground reference points. We conduct measurements on two different water bodies to outline the performance of the methodology under varying flow regimes, illumination conditions, and distributions of surface tracers. Specifically, the Aniene river is characterized by high surface flow velocity, abundant and homogeneously distributed ripples and water reflections, and a meagre number of buoyant tracers. The Tiber river, on the other hand, presents lower surface flows, isolated reflections, and several floating objects. Videos are processed through image-based analyses to correct for lens distortions and analyzed with commercially available PIV software. Surface flow velocity estimates are compared to supervised measurements performed by visually tracking objects floating on the stream surface and to rating curves developed by the Ufficio Idrografico e Mareografico (UIM) at Regione Lazio, Italy. Experimental findings demonstrate that the presence of tracers is crucial for surface flow velocity estimates. Further, considering surface ripples and patterns may lead to underestimation in LSPIV analyses.
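
    The core velocimetry step that such setups feed, cross-correlating an interrogation window between consecutive frames, can be sketched briefly. Below is a minimal single-window illustration using OpenCV's phaseCorrelate; the file name, frame rate, and metres-per-pixel scale are placeholders (in the described setup the laser system would supply the scale), and real LSPIV tiles the frame into many windows to obtain a velocity field.

    import cv2
    import numpy as np

    def surface_velocity(frame_a, frame_b, fps, metres_per_pixel):
        """Single-window LSPIV-style estimate of surface speed in m/s."""
        a = np.float32(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY))
        b = np.float32(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY))
        (dx, dy), _response = cv2.phaseCorrelate(a, b)   # sub-pixel shift
        return np.hypot(dx, dy) * metres_per_pixel * fps

    cap = cv2.VideoCapture("stream.mp4")   # placeholder file name
    ok1, prev = cap.read()
    ok2, curr = cap.read()
    if ok1 and ok2:
        v = surface_velocity(prev, curr, fps=25.0, metres_per_pixel=0.004)
        print(f"estimated surface velocity: {v:.2f} m/s")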

  8. Video Analysis in Multi-Intelligence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Key, Everett Kiusan; Van Buren, Kendra Lu; Warren, Will

    This is a project which was performed by a graduated high school student at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  9. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    PubMed

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting video streams is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and respects the limited processing and energy resources of wireless video sensor nodes while preserving video quality at the receiver side. The compression, transport, and routing protocols are proposed at the application, transport, and network layers respectively; a dropping scheme is also presented at the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  10. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task, since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream (and thus the classification performance achieved by the ensemble of classifiers) and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
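
    As a rough illustration of contribution 1), a utility of this kind can be written as a classification-quality term minus a queuing-delay penalty. The sketch below uses a plain M/M/1 delay model and an invented weighting; it is a generic stand-in, not the paper's actual metric.

    # Illustrative utility for one binary filtering classifier: reward correct
    # forwarding decisions, penalize end-to-end delay modeled as an M/M/1
    # queue. Generic stand-in, not the paper's exact formulation.

    def mm1_delay(arrival_rate, service_rate):
        if arrival_rate >= service_rate:
            return float("inf")            # overloaded: unbounded delay
        return 1.0 / (service_rate - arrival_rate)

    def utility(p_detect, p_false_alarm, arrival_rate, service_rate,
                delay_weight=0.5):
        accuracy_term = p_detect - p_false_alarm   # quality of the filter
        return accuracy_term - delay_weight * mm1_delay(arrival_rate, service_rate)

    # A looser forwarding threshold raises detection but also downstream load:
    print(utility(p_detect=0.95, p_false_alarm=0.20, arrival_rate=80, service_rate=100))
    print(utility(p_detect=0.80, p_false_alarm=0.05, arrival_rate=40, service_rate=100))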

  11. Our experiences with development of digitised video streams and their use in animal-free medical education.

    PubMed

    Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan

    2004-06-01

    Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative model employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image-capturing devices and software enable the use of digitised video streams tailored precisely to the specific situation. Here, we demonstrate that, with reasonable financial resources, it is possible to prepare video sequences and introduce them into a PowerPoint presentation, thereby shaping the teaching process according to individual students' needs and specificities.

  12. Integrating distributed multimedia systems and interactive television networks

    NASA Astrophysics Data System (ADS)

    Shvartsman, Alex A.

    1996-01-01

    Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor offers a complete, end-to-end, commercially deployable and scalable interactive multimedia application system over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome a lack of standards and deal with unpredictable development cycles and the quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility are limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.

  13. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  14. Taking the High Road: Privacy in the Age of Drones

    ERIC Educational Resources Information Center

    Hamilton, Lucas; Harrington, Michael; Lawrence, Cameron; Perrot, Remy; Studer, Severin

    2017-01-01

    This case examines the technological, ethical and legal issues surrounding the use of drones in business. Mary McKay, a recent Management Information Systems (MIS) graduate sets up a professional photography and videography business. She gains a leg up on the competition with drone-mounted cameras and live video streaming through the free…

  15. Simulation of water vapor condensation on LOX droplet surface using liquid nitrogen

    NASA Technical Reports Server (NTRS)

    Powell, Eugene A.

    1988-01-01

    The formation of ice or water layers on liquid oxygen (LOX) droplets in the Space Shuttle Main Engine (SSME) environment was investigated. Formation of such ice/water layers is indicated by phase-equilibrium considerations under the conditions of high partial pressure of water vapor (steam) and low LOX droplet temperature prevailing in the SSME preburner or main chamber. An experimental investigation was begun using liquid nitrogen as a LOX simulant. A monodisperse liquid nitrogen droplet generator was developed which uses an acoustic driver to force the stream of liquid emerging from a capillary tube to break up into a stream of regularly spaced, uniformly sized spherical droplets. The atmospheric-pressure liquid nitrogen in the droplet generator reservoir was cooled below its boiling point to prevent two-phase flow from occurring in the capillary tube. An existing steam chamber was modified for injection of liquid nitrogen droplets into atmospheric-pressure superheated steam. The droplets were imaged using a stroboscopic video system and a laser shadowgraphy system. Several tests were conducted in which liquid nitrogen droplets were injected into the steam chamber. Under conditions of periodic droplet formation, images of 600-micron-diameter liquid nitrogen droplets were obtained with the stroboscopic video system.

  16. Resource optimized TTSH-URA for multimedia stream authentication in swallowable-capsule-based wireless body sensor networks.

    PubMed

    Wang, Wei; Wang, Chunqiu; Zhao, Min

    2014-03-01

    To ease the burden on hospitalization capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges, including energy efficiency and content quality guarantees. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bitrate overhead over a wireless channel. Based on a full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H-oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSNs. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gains in both authenticated video quality and energy efficiency.
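
    A common way to amortize one signature over many packets, in the spirit of combining signatures with cheap hashes, is to chain packet hashes and sign only the chain head. The sketch below shows that generic construction, not the paper's TTSH scheme; an HMAC stands in for the asymmetric signature to keep the example self-contained.

    import hashlib, hmac, os

    KEY = os.urandom(32)          # stands in for the sender's signing key

    def sign(digest: bytes) -> bytes:
        # HMAC stands in for an asymmetric signature in this sketch.
        return hmac.new(KEY, digest, hashlib.sha256).digest()

    def build_chain(packets):
        """Hash each packet into its predecessor and sign only the chain
        head, so one signature covers the whole group of packets."""
        next_hash, units = b"", []
        for payload in reversed(packets):
            units.append((payload, next_hash))
            next_hash = hashlib.sha256(payload + next_hash).digest()
        units.reverse()
        return sign(next_hash), units      # next_hash is the chain head

    def verify_chain(signature, units):
        expected = None
        for i, (payload, nxt) in enumerate(units):
            h = hashlib.sha256(payload + nxt).digest()
            if i == 0:
                if not hmac.compare_digest(sign(h), signature):
                    return False           # head fails the signature check
            elif not hmac.compare_digest(h, expected):
                return False               # broken link: loss or tampering
            expected = nxt
        return True

    sig, units = build_chain([b"GOP header", b"frame 1", b"frame 2"])
    assert verify_chain(sig, units)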

  17. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system prevents video data from being illegally stolen or maliciously tampered with, and so solves its security problems. At the same time, to meet the real-time, transparent encryption requirements of high-speed audio and video data streams in the information security field, and based on an in-depth analysis of the AES algorithm principle, this paper proposes specific realization methods for the AES algorithm in a digital video system, together with optimization solutions, on the TMS320DM6446 hardware platform with the DaVinci software framework. The test results show that digital movies encrypted with AES-128 cannot be played normally, which ensures the security of the digital movies. A comparison of the performance of the AES-128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
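
    The AES layer itself is easy to isolate. Below is a minimal sketch of AES-128 encryption of a video buffer using the Python cryptography package; the choice of CTR mode (which preserves stream length and needs no padding) is ours for illustration, since the abstract specifies only AES-128.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_video_payload(key: bytes, payload: bytes) -> bytes:
        """AES-128 in CTR mode over a video elementary-stream buffer."""
        nonce = os.urandom(16)                      # per-buffer counter block
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(payload) + enc.finalize()

    def decrypt_video_payload(key: bytes, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:16], blob[16:]
        dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
        return dec.update(ciphertext) + dec.finalize()

    key = os.urandom(16)                            # 128-bit key
    frame = b"\x00\x00\x00\x01" + os.urandom(1024)  # fake NAL unit
    assert decrypt_video_payload(key, encrypt_video_payload(key, frame)) == frame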

  18. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon on our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  19. Strategies for Transporting Data Between Classified and Unclassified Networks

    DTIC Science & Technology

    2016-03-01

    datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ...and the exit point of data are separate and can be tightly controlled. This does effectively prevent the comingling of data and is used in industry to...perform functions such as streaming video and audio from secure to insecure networks (ref. 1). A second disadvantage lies in the fact that the

  20. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    NASA Astrophysics Data System (ADS)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  1. Secured web-based video repository for multicenter studies

    PubMed Central

    Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H. A.; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S

    2015-01-01

    Background We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Methods Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. Results This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. Conclusions We believe our system can be a model for similar projects that require access to common video resources. PMID:25630890

  2. StreamWorks: the live and on-demand audio/video server and its applications in medical information systems

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Gordon, Howard; Palisson, Patrice M.; Prost, Remy; Goutte, Robert

    1996-05-01

    Facing a world undergoing fundamental and rapid change, healthcare organizations are seeking ways to increase innovation, quality, productivity, and patient value, keys to more effective care. Individual clinics acting alone can respond in only a limited way, so re-engineering the processes by which services are delivered demands real-time collaborative technology that provides immediate information sharing, improving the management and coordination of information in cross-functional teams. StreamWorks is a development-stage architecture that uses a distribution technique to deliver an advanced information management system for telemedicine. The challenge for StreamWorks in telemedicine is to use telecommunications and information technology to extend an equitable quality of health care to patients in less favored regions, such as India or China, where the quality of medical care varies greatly by region but where some very current communications facilities exist.

  3. Psychovisual masks and intelligent streaming RTP techniques for the MPEG-4 standard

    NASA Astrophysics Data System (ADS)

    Mecocci, Alessandro; Falconi, Francesco

    2003-06-01

    In today's multimedia audio-video communication systems, data compression plays a fundamental role by reducing bandwidth waste and the costs of infrastructure and equipment. Among the different compression standards, MPEG-4 is becoming more and more accepted and widespread. Even though one of the fundamental aspects of this standard is the possibility of coding video objects separately (i.e. separating moving objects from the background and adapting the coding strategy to the video content), currently implemented codecs work only at the full-frame level. In this way, many advantages of the flexible MPEG-4 syntax are missed. This lack is due both to the difficulties in properly segmenting moving objects in real scenes (featuring arbitrary motion of the objects and of the acquisition sensor), and to the current use of these codecs, which are mainly oriented towards the market of DVD backups (a full-frame approach is enough for these applications). In this paper we propose a codec for MPEG-4 real-time object streaming that codes the moving objects and the scene background separately. The proposed codec is capable of adapting its strategy during the transmission by analysing the video currently transmitted and setting the coder parameters and modalities accordingly. For example, the background can be transmitted as a whole or divided into "slightly detailed" and "highly detailed" zones that are coded in different ways to reduce the bit-rate while preserving the perceived quality. The coder can automatically switch from one modality to the other in real time during the transmission, depending on the current video content. Psychovisual masks and other video-content-based measurements have been used as inputs for a Self Learning Intelligent Controller (SLIC) that changes the parameters and the transmission modalities. The current implementation is based on the ISO 14496 standard code that allows Video Object (VO) transmission (other open-source codecs, such as DivX, Xvid, and Cisco's MPEG-4IP, have been analyzed but, as of today, they do not support VO). The original code has been deeply modified to integrate the SLIC and to adapt it for real-time streaming. A custom RTP (Real Time Protocol) has been defined and a client-server application has been developed. The viewer can decode and demultiplex the stream in real time, adapting to the changing modalities adopted by the server according to the current video content. The proposed codec works as follows: the image background is separated by means of a segmentation module and transmitted with a wavelet compression scheme similar to that used in JPEG2000. The VO are coded separately and multiplexed with the background stream. At the receiver the stream is demultiplexed to obtain the background and the VO, which are subsequently pasted together. The final quality depends on many factors, in particular the quantization parameters, the Group Of Video Object (GOV) length, the GOV structure (i.e. the number of I-P-B VOPs), and the search area for motion compensation. These factors are strongly related to the following measurement parameters (defined during development): the Objects Apparent Size (OAS) in the scene, the Video Object Incidence factor (VOI), and the temporal correlation (measured through the Normalized Mean SAD, NMSAD). The SLIC module analyzes the currently transmitted video and selects the most appropriate settings by choosing from a predefined set of transmission modalities.
    For example, in the case of a highly temporally correlated sequence, the number of B-VOPs is increased to improve the compression ratio. The strategy for selecting the number of B-VOPs turns out to be very different from those reported in the literature for B-frames (adopted for MPEG-1 and MPEG-2), due to the different behaviour of the temporal correlation when limited only to moving objects. The SLIC module also decides how to transmit the background. In our implementation we adopted the Visual Brain theory, i.e. the study of what the "psychic eye" can get from a scene. According to this theory, a Psychomask Image Analysis (PIA) module has been developed to extract the visually homogeneous regions of the background. The PIA module produces two complementary masks, one for the visually low-variance zones and one for the highly variable zones; these zones are compressed with different strategies and encoded into two multiplexed streams. Practical experiments showed that the separate coding is advantageous only if the low-variance zones exceed 50% of the whole background area (due to the overhead of transmitting the zone masks). The SLIC module decides the appropriate transmission modality by analyzing the results produced by the PIA module. The main features of this codec are low bitrate, good image quality and coding speed. The current implementation runs in real time on standard PC platforms, the major limitation being the fixed position of the acquisition sensor. This limitation is due to the difficulties in separating moving objects from the background when the acquisition sensor moves. Our current real-time segmentation module does not produce suitable results if the acquisition sensor moves (only slight oscillatory movements are tolerated). In any case, the system is particularly suitable for tele-surveillance applications at low bit-rates, where the camera is usually fixed or alternates among some predetermined positions (our segmentation module can accurately separate moving objects from the static background when the acquisition sensor stops, even if different scenes are seen as a result of the sensor displacements). Moreover, the proposed architecture is general, in the sense that when real-time, robust segmentation systems (capable of separating objects in real time from the background while the sensor itself is moving) become available, they can be easily integrated while leaving the rest of the system unchanged. Experimental results for real sequences for traffic monitoring and for people tracking and safety control are reported and discussed in depth in the paper. The whole system has been implemented in standard ANSI C code and currently runs on standard PCs under the Microsoft Windows operating system (Windows 2000 Pro and Windows XP).
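
    The control idea, mapping measured content statistics (NMSAD, VOI, psychomask output) to coder settings, can be illustrated with a simple rule-based stand-in for the self-learning controller. All thresholds below are invented except the 50% background-split rule, which comes from the text above.

    # Illustrative controller in the spirit of the SLIC module: map measured
    # content statistics to coder settings. Thresholds are invented for the
    # sketch; the real controller is self-learning.

    def choose_coding_modality(nmsad, voi, low_variance_fraction):
        """nmsad: normalized mean SAD of the video objects (lower means more
        temporal correlation); voi: video-object incidence factor; and
        low_variance_fraction: share of the background judged visually
        homogeneous by the psychomask analysis."""
        # Highly correlated content tolerates more B-VOPs per GOV.
        n_b_vops = 4 if nmsad < 0.05 else 2 if nmsad < 0.15 else 0
        # Large, fast objects deserve a wider motion-compensation search.
        search_range = 32 if voi > 0.3 else 16
        # Splitting the background pays off only past ~50% homogeneous area
        # (the zone masks themselves cost bits to transmit).
        split_background = low_variance_fraction > 0.5
        return {"gov_b_vops": n_b_vops,
                "search_range": search_range,
                "split_background": split_background}

    print(choose_coding_modality(nmsad=0.03, voi=0.12, low_variance_fraction=0.7))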

  4. Exploiting spatio-temporal characteristics of human vision for mobile video applications

    NASA Astrophysics Data System (ADS)

    Jillani, Rashad; Kalva, Hari

    2008-08-01

    Video applications on handheld devices such as smart phones pose a significant challenge in achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g. video streaming) for mobile handheld devices. These devices are lightweight and modestly sized, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements which make them extremely resource-hungry for mobile devices. In addition, device-specific properties (e.g. the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of the adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce better perceptual-quality video for mobile devices. Furthermore, we demonstrate how such a framework can affect the user experience on a handheld device.

  5. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams, including audio, video, centroid locations, and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions, with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
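
    Once corresponding landmarks are matched across two calibrated views, the three-dimensional reconstruction reduces to linear triangulation. A minimal sketch with OpenCV follows; the projection matrices and pixel coordinates are invented stand-ins for calibration and tracking output.

    import cv2
    import numpy as np

    # Two hypothetical 3x4 camera projection matrices (from calibration).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
    R = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))[0]   # second camera rotated
    t = np.array([[-0.5], [0.0], [0.0]])
    P2 = np.hstack([R, t])

    # Matched landmark centroids in each view, shape (2, N).
    pts1 = np.array([[320.0, 410.5], [240.0, 255.0]])
    pts2 = np.array([[291.7, 382.1], [240.3, 254.6]])

    # Linear triangulation; returns homogeneous coordinates, shape (4, N).
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    X = (X_h[:3] / X_h[3]).T                          # Nx3 metric points
    print(X)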

  6. Multimedia content description framework

    NASA Technical Reports Server (NTRS)

    Bergman, Lawrence David (Inventor); Mohan, Rakesh (Inventor); Li, Chung-Sheng (Inventor); Smith, John Richard (Inventor); Kim, Michelle Yoonk Yung (Inventor)

    2003-01-01

    A framework is provided for describing multimedia content and a system in which a plurality of multimedia storage devices employing the content description methods of the present invention can interoperate. In accordance with one form of the present invention, the content description framework is a description scheme (DS) for describing streams or aggregations of multimedia objects, which may comprise audio, images, video, text, time series, and various other modalities. This description scheme can accommodate an essentially limitless number of descriptors in terms of features, semantics or metadata, and facilitate content-based search, index, and retrieval, among other capabilities, for both streamed or aggregated multimedia objects.

  7. Design and Uses of an Audio/Video Streaming System for Students with Disabilities

    ERIC Educational Resources Information Center

    Hogan, Bryan J.

    2004-01-01

    Within most educational institutes there are a substantial number of students with varying physical and mental disabilities. These might range from difficulty in reading to difficulty in attending the institute. Whatever their disability, it places a barrier between them and their education. In the past few years there have been rapid and striking…

  8. Transforming Education Research Through Open Video Data Sharing.

    PubMed

    Gilmore, Rick O; Adolph, Karen E; Millman, David S; Gordon, Andrew

    2016-01-01

    Open data sharing promises to accelerate the pace of discovery in the developmental and learning sciences, but significant technical, policy, and cultural barriers have limited its adoption. As a result, most research on learning and development remains shrouded in a culture of isolation. Data sharing is the rare exception (Gilmore, 2016). Many researchers who study teaching and learning in classroom, laboratory, museum, and home contexts use video as a primary source of raw research data. Unlike other measures, video captures the complexity, richness, and diversity of behavior. Moreover, because video is self-documenting, it presents significant potential for reuse. However, the potential for reuse goes largely unrealized because videos are rarely shared. Research videos contain information about participants' identities making the materials challenging to share. The large size of video files, diversity of formats, and incompatible software tools pose technical challenges. The Databrary (databrary.org) digital library enables researchers who study learning and development to store, share, stream, and annotate videos. In this article, we describe how Databrary has overcome barriers to sharing research videos and associated data and metadata. Databrary has developed solutions for respecting participants' privacy; for storing, streaming, and sharing videos; and for managing videos and associated metadata. The Databrary experience suggests ways that videos and other identifiable data collected in the context of educational research might be shared. Open data sharing enabled by Databrary can serve as a catalyst for a truly multidisciplinary science of learning.

  10. Promoting Academic Programs Using Online Videos

    ERIC Educational Resources Information Center

    Clark, Thomas; Stewart, Julie

    2007-01-01

    In the last 20 years, the Internet has evolved from simply conveying text and then still photographs and music to the present-day medium in which individuals are contributors and consumers of a nearly infinite number of professional and do-it-yourself videos. In this dynamic environment, new generations of Internet users are streaming video and…

  11. Software Quality Measurement for Distributed Systems. Volume 3. Distributed Computing Systems: Impact on Software Quality.

    DTIC Science & Technology

    1983-07-01

    Distributed Computing Systems: Impact on Software Quality. Thomas... Topics include "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing". This is discussed in the last part... data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data, video

  12. Innovative hyperchaotic encryption algorithm for compressed video

    NASA Astrophysics Data System (ADS)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that a stream cryptosystem can achieve good real-time performance and flexibility by encrypting only selected parts of the block data and header information of the compressed video stream. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a linear congruential algorithm over the field Z(2^32 - 1) to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of the control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In this hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The hyperchaotic cryptography proves robust security under cryptanalysis and shows good real-time performance and flexible implementation capability in arithmetic evaluation and testing.
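
    To make the construction concrete, the sketch below combines a logistic map with a linear congruential generator over Z(2^32 - 1) to produce an XOR keystream. The constants are illustrative, the layering and parameter-update machinery of the actual scheme are omitted, and such a toy construction should not be treated as a vetted cipher.

    # Sketch of a chaotic keystream in the spirit described above: a logistic
    # map drives the nonlinear part; an LCG over Z(2**32 - 1) whitens it.
    # For exposition only; NOT a vetted cipher.

    M = 2**32 - 1

    def keystream(x0: float, seed: int, n_bytes: int) -> bytes:
        x, s = x0, seed
        out = bytearray()
        for _ in range(n_bytes):
            x = 3.9999 * x * (1.0 - x)          # logistic map, chaotic regime
            s = (1103515245 * s + 12345) % M    # LCG over Z(2**32 - 1)
            out.append((int(x * 255) ^ (s & 0xFF)) & 0xFF)
        return bytes(out)

    def xor_encrypt(data: bytes, x0=0.31337, seed=0xC0FFEE) -> bytes:
        ks = keystream(x0, seed, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    payload = b"compressed video packet"
    assert xor_encrypt(xor_encrypt(payload)) == payload   # XOR is self-inverse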

  13. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
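
    The mapping-and-query step can be illustrated generically: project per-shot feature vectors to a low-dimensional space and rank by distance. The sketch below uses PCA via SVD over random stand-in features; the paper's actual features and database machinery differ.

    import numpy as np

    # Per-shot feature vectors (e.g., histograms of macroblock types and
    # motion-vector statistics); random stand-ins here.
    rng = np.random.default_rng(0)
    features = rng.random((500, 64))        # 500 shots x 64 raw features

    # PCA via SVD: keep the top 8 components as the index key.
    mean = features.mean(axis=0)
    _, _, Vt = np.linalg.svd(features - mean, full_matrices=False)
    index = (features - mean) @ Vt[:8].T    # 500 x 8 low-dimensional keys

    def query(shot_features, k=5):
        key = (shot_features - mean) @ Vt[:8].T
        dists = np.linalg.norm(index - key, axis=1)
        return np.argsort(dists)[:k]        # ids of the k most similar shots

    print(query(features[42]))              # shot 42 should rank itself first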

  14. Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining

    NASA Astrophysics Data System (ADS)

    Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio

    2013-12-01

    Nowadays, networks and terminals with diverse bandwidth characteristics and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these video streams were compressed using scalable video coding schemes, they would be able to adapt to heterogeneous networks and a wide range of terminals. However, since most multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert a non-scalable H.264/AVC bitstream into scalable bitstreams with temporal scalability, in the baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while coding efficiency is maintained.
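
    The acceleration idea can be sketched as a supervised classifier that predicts the scalable-coding mode from features already available in the decoded H.264/AVC stream, replacing the exhaustive rate-distortion search. Below is a minimal stand-in with a scikit-learn decision tree; the feature set, labels, and training data are invented.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    # e.g., columns: AVC mode, residual energy, |MV|, partition size...
    X_train = rng.random((5000, 4))
    y_train = rng.integers(0, 3, 5000)      # 0=SKIP, 1=INTER16x16, 2=INTRA

    tree = DecisionTreeClassifier(max_depth=6).fit(X_train, y_train)

    def fast_mode_decision(mb_features):
        """Replace the RDO loop with one tree lookup per macroblock."""
        return tree.predict(mb_features.reshape(1, -1))[0]

    print(fast_mode_decision(rng.random(4)))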

  15. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC permits the bitstream to be split easily into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  16. MWAHCA: a multimedia wireless ad hoc cluster architecture.

    PubMed

    Diaz, Juan R; Lloret, Jaime; Jimenez, Jose M; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network; this requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters which are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal.
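
    Purely as an illustration of the idea, the sketch below scores candidate cluster heads from measured QoS parameters and node capacity; the weights and normalizations are hypothetical, not those of the MWAHCA proposal.

    ```python
    def node_score(bw_mbps, delay_ms, jitter_ms, loss_pct, capacity):
        # Reward bandwidth and capacity; penalize delay, jitter, and loss.
        return (0.4 * bw_mbps / 54.0 + 0.3 * capacity
                - 0.1 * delay_ms / 100 - 0.1 * jitter_ms / 50 - 0.1 * loss_pct / 5)

    nodes = {"n1": (24, 40, 5, 1.0, 0.9), "n2": (54, 15, 2, 0.2, 0.6)}
    print(max(nodes, key=lambda n: node_score(*nodes[n])))  # chosen cluster head
    ```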

  17. Deep learning architecture for recognition of abnormal activities

    NASA Astrophysics Data System (ADS)

    Khatrouch, Marwa; Gnouma, Mariem; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    Video surveillance is one of the key areas of computer vision research. The scientific challenge in this field involves the implementation of automatic systems to obtain detailed information about the behavior of individuals and groups. In particular, the detection of abnormal movements of groups or individuals requires a fine analysis of frames in the video stream. In this article, we propose a new method to detect anomalies in crowded scenes. We categorize the video in a supervised mode, accompanied by unsupervised learning based on the autoencoder principle. To construct an informative representation for the recognition of these behaviors, we use a technique based on the superposition of human silhouettes. Evaluation on the UMN dataset demonstrates the effectiveness of the proposed approach.
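
    A minimal sketch of autoencoder-based anomaly detection, assuming silhouette images are already flattened into feature vectors; an MLP trained to reconstruct its input stands in for the paper's architecture, and the threshold is arbitrary.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    normal = rng.normal(0, 1, (500, 64))       # "normal crowd" feature vectors
    ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500).fit(normal, normal)

    def is_abnormal(x, threshold=2.0):
        # High reconstruction error marks a sample unlike the training data.
        return np.mean((ae.predict(x.reshape(1, -1)) - x) ** 2) > threshold

    print(is_abnormal(rng.normal(5, 1, 64)))   # shifted sample -> likely True
    ```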

  18. Remotely accessible laboratory for MEMS testing

    NASA Astrophysics Data System (ADS)

    Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.

    2010-02-01

    We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (aka MicroElectroMechanical Systems, MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory system can be divided into two major parts: the server side and the client side. The server side, located at Texas Tech University, hosts a server machine that runs the Linux operating system and is used for interfacing the MEMS lab with the outside world via the internet. The controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabView Virtual Instruments. An optical microscope (100×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.

  19. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process employs a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature: the first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between its signature and those in the database, based on the ordinal signature and SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We have tested our system in several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by the TRECVID 2009 "content-based copy detection" task. In addition, we refer to the requirements issued in the call for proposals by the MPEG standard on a similar task. Initial results show that our framework is effective and robust. Compared to our previous work, in addition to the reductions in storage space and processing time achieved by the ordinal-based method, introducing the SIFT features raises the overall accuracy to an F1 measure of about 96% (an improvement of about 8%).
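
    A hedged sketch of the SIFT half of the signature, using OpenCV to extract descriptors from two keyframes and count ratio-test matches as a similarity score (the image paths are placeholders; requires opencv-python >= 4.4).

    ```python
    import cv2

    img1 = cv2.imread("keyframe_db.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("keyframe_suspect.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)

    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio
    print(len(good))     # a high match count suggests a copy
    ```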

  20. Mobile Vehicle Teleoperated Over Wireless IP

    DTIC Science & Technology

    2007-06-13

    VideoLAN software suite. The VLC media player portion of this suite handles network streaming of video, as well as the receipt and display of the video...is found in appendix C.7. Video Display The video feed is displayed for the operator using VLC opened independently from the control sending program...This gives the operator the most choice in how to configure the display. To connect VLC to the feed all you need is the IP address from the Java

  1. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information from the audio and video modalities to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that extends a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data, as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all from publicly available data sets. The speaker diarization results favor the proposed multimodal framework, which outperforms single-modality analysis and improves over the state-of-the-art audio-based speaker diarization.

  2. Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

    PubMed Central

    Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.

    2015-01-01

    A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is live streamed video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed which allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show consistently high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture. PMID:26247057

  3. Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring

    PubMed Central

    Alldieck, Thiemo; Bahnsen, Chris H.; Moeslund, Thomas B.

    2016-01-01

    In order to enable robust 24-h monitoring of traffic under changing environmental conditions, it is beneficial to observe the traffic scene using several sensors, preferably from different modalities. To fully benefit from multi-modal sensor output, however, one must fuse the data. This paper introduces a new approach for fusing color RGB and thermal video streams by using not only the information from the videos themselves, but also the available contextual information of a scene. The contextual information is used to judge the quality of a particular modality and guides the fusion of two parallel segmentation pipelines for the RGB and thermal video streams. The potential of the proposed context-aware fusion is demonstrated by extensive tests of quantitative and qualitative characteristics on existing and novel video datasets, benchmarked against competing approaches to multi-modal fusion. PMID:27869730
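
    The fusion step might be sketched as below: per-pixel foreground probabilities from the two parallel pipelines are blended with context-derived modality weights (the weights and threshold are hypothetical placeholders).

    ```python
    import numpy as np

    rgb_prob = np.random.rand(240, 320)       # RGB pipeline foreground probability
    thermal_prob = np.random.rand(240, 320)   # thermal pipeline probability

    w_rgb, w_thermal = 0.3, 0.7               # e.g., night context favors thermal
    mask = (w_rgb * rgb_prob + w_thermal * thermal_prob) > 0.5
    print(mask.mean())                        # fraction of foreground pixels
    ```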

  4. Flexible video conference system based on ASICs and DSPs

    NASA Astrophysics Data System (ADS)

    Hu, Qiang; Yu, Songyu

    1995-02-01

    In this paper, a video conference system we developed recently is presented. In this system the video codec is compatible with CCITT H.261, the audio codec is compatible with G.711 and G.722, and the channel interface circuit is designed according to CCITT H.221. Emphasis is given to the video codec, which is both flexible and robust. The video codec is based on LSI Logic's L64700 series video compression chipset. The main function blocks of H.261, such as the DCT, motion estimation, VLC and VLD, are performed by this chipset. However, it is a bare chipset: no peripheral functions, such as a memory interface, are integrated into it, which makes system implementation considerably more difficult. To implement the frame buffer controller, a DSP (TMS320C25) and a group of GALs are used, with SRAM serving as the current- and previous-frame buffers. The DSP controls not only the frame buffer but also the whole video codec. Because of the DSP, the architecture of the video codec is very flexible, and many system parameters can be reconfigured for different applications. The whole video codec has a pipelined architecture. In H.261, BCH(511,493) coding is recommended to protect against random transmission errors, but a burst error causes serious damage. To solve this problem, an interleaving method is used: the BCH code is interleaved before transmission and de-interleaved at the receiver, so the bit stream returns to its original order while the error bits are distributed across several BCH words, which the BCH decoder is able to correct. Considering that extreme conditions may occur, a watchdog-like function block is implemented that ensures the receiver can recover no matter how serious the transmission errors are. In developing the video conference system, a new synchronization problem had to be solved: the monitor at the receiver cannot easily be synchronized with the camera at the other side. A new method which solves this problem successfully is described in detail.
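
    The burst-error countermeasure is a standard block interleaver, sketched below: codewords are written row-wise and read out column-wise, so a burst on the channel is spread across several BCH words at the receiver.

    ```python
    def interleave(bits, rows, cols):
        assert len(bits) == rows * cols
        return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

    def deinterleave(bits, rows, cols):
        return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

    data = list(range(12))                  # stand-in for bits of 3 codewords
    tx = interleave(data, rows=3, cols=4)   # order sent on the channel
    assert deinterleave(tx, rows=3, cols=4) == data   # receiver restores order
    ```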

  5. Development and preliminary validation of an interactive remote physical therapy system.

    PubMed

    Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen

    2015-01-01

    In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network for a real-time, low-latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent-living senior subjects in Kansas City, MO. Experiments were also conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect against ground-truth data from a Vicon motion capture system. Good agreement was found in the validation tests. The results show the potential of IPTS to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.

  6. "Deja Vu"? A Decade of Research on Language Laboratories, Television and Video in Language Learning

    ERIC Educational Resources Information Center

    Vanderplank, Robert

    2010-01-01

    The developments in the last ten years in the form of DVD, streaming video, video on demand, interactive television and digital language laboratories call for an assessment of the research into language teaching and learning making use of these technologies and the learning paradigms underpinning them. This paper surveys research on language…

  7. Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link

    DOEpatents

    Rush, Bobby G.; Riley, Robert

    2015-09-29

    Pre-processing is applied to a raw VideoSAR (or similar near-video-rate) product to transform the image frame sequence into a product that more closely resembles the type of content for which conventional video codecs are designed, while sufficiently maintaining the utility and visual quality of the product delivered by the codec.

  8. Video in Distance Education: ITFS vs. Web-Streaming--Evaluation of Student Attitudes

    ERIC Educational Resources Information Center

    Reisslein, Jana; Seeling, Patrick; Reisslein, Martin

    2005-01-01

    The use of video in distance education courses has a long tradition, with many colleges and universities having been delivering distance education courses with video since the 80's using the Instructional Television Fixed Service (ITFS) and cable television. With the emergence of the Internet and the increased access bandwidths from private homes…

  9. Learning a Continuous-Time Streaming Video QoE Model.

    PubMed

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C

    2018-05-01

    Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
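
    This is not the paper's model, but a toy sketch of the memory and recency effects it describes: per-second quality scores are aggregated with an exponential recency weight, and stalled seconds contribute a fixed penalty (all constants hypothetical).

    ```python
    import numpy as np

    def continuous_qoe(quality, rebuffer, half_life=10.0, stall_penalty=30.0):
        """quality, rebuffer: per-second arrays; returns QoE at each instant."""
        out = []
        for t in range(len(quality)):
            w = 0.5 ** ((t - np.arange(t + 1)) / half_life)   # recency weights
            q = quality[: t + 1] - stall_penalty * rebuffer[: t + 1]
            out.append(np.sum(w * q) / np.sum(w))
        return np.array(out)

    q, r = np.full(30, 80.0), np.zeros(30)
    r[10:13] = 1                                  # a 3-second stall at t = 10
    print(continuous_qoe(q, r)[[5, 12, 29]])      # dip, then slow recovery
    ```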

  10. Protection of HEVC Video Delivery in Vehicular Networks with RaptorQ Codes

    PubMed Central

    Martínez-Rach, Miguel; López, Otoniel; Malumbres, Manuel Pérez

    2014-01-01

    With future vehicles equipped with processing capability, storage, and communications, vehicular networks will become a reality. A vast number of applications will arise that make use of this connectivity, some of them based on video streaming. In this paper we focus on streaming of the HEVC video coding standard in vehicular networks and how it deals with packet losses with the aid of RaptorQ, a forward error correction scheme. As vehicular networks are prone to packet loss, protection mechanisms are necessary if we want to guarantee a minimum level of quality of experience to the final user. We have run simulations to evaluate which configurations fit best in this type of scenario. PMID:25136675
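
    RaptorQ itself is beyond a short example, but the FEC principle can be sketched with a single XOR parity packet per source block, which repairs any one lost packet without retransmission (packet contents are placeholders of equal length).

    ```python
    from functools import reduce

    def xor_parity(packets):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

    block = [b"NAL1", b"NAL2", b"NAL3"]          # equal-length source packets
    repair = xor_parity(block)                   # one repair packet

    received = [block[0], None, block[2]]        # packet 2 lost in the VANET
    recovered = xor_parity([p for p in received if p] + [repair])
    assert recovered == block[1]                 # repaired from parity alone
    ```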

  11. Formal testing and utilization of streaming media to improve flight crew safety knowledge.

    PubMed

    Bellazzini, Marc A; Rankin, Peter M; Quisling, Jason; Gangnon, Ronald; Kohrs, Mike

    2008-01-01

    Increased concerns over the safety of air medical transport have prompted development of novel ways to increase safety. The objective of our study was to determine whether an Internet streaming media safety video increased crew safety knowledge. Twenty-three of 40 crew members took an online safety pre-test, watched a safety video specific to our program, and completed immediate post-testing and long-term post-testing 6 months later. Mean pre-test, post-test, and 6-month follow-up test scores were 84.9%, 92.3%, and 88.4%, respectively. There was a statistically significant difference in all scores (p

  12. Huffman coding in advanced audio coding standard

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to optimising hardware resource requirements, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
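
    As a sketch of the noiseless-coding idea, the snippet below builds a Huffman code from symbol frequencies (note that AAC itself uses fixed, standardized codebooks rather than trees built at run time).

    ```python
    import heapq

    def huffman_codes(freqs):
        # Heap entries: [total frequency, unique tiebreaker, {symbol: code}].
        heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
        return heap[0][2]

    print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
    # frequent symbols receive the shortest binary strings
    ```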

  13. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user-specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is essentially a general-purpose search engine for locating arbitrary patterns in binary data streams. It has uses in video analytics, image analysis, audio analysis, searching hard drives, monitoring network traffic, etc.

  14. Viewing Michigan's Digital Future: Results of a Survey of Educators' Use of Digital Video in the USA

    ERIC Educational Resources Information Center

    Mardis, Marcia A.

    2009-01-01

    Digital video is a growing and important presence in student learning. This paper reports the results of a survey of American educators in Michigan (n = 426) conducted in spring 2008. The survey included questions about educators' attitudes toward the streaming and downloadable video services available to them in their schools. The survey results…

  15. Application of MPEG-7 descriptors for content-based indexing of sports videos

    NASA Astrophysics Data System (ADS)

    Hoeynck, Michael; Auweiler, Thorsten; Ohm, Jens-Rainer

    2003-06-01

    The amount of multimedia data available worldwide is increasing every day. There is a vital need to annotate multimedia data in order to allow universal content access and to provide content-based search-and-retrieval functionalities. Since supervised video annotation can be time-consuming, an automatic solution is appreciated. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports, and present our application for the automatic annotation of equestrian sports videos. We concentrate in particular on MPEG-7-based feature extraction and content description. We apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information and taking specific domain knowledge into account. Having determined individual shot positions as well as the visual highlights, we store this information jointly with additional textual information in an MPEG-7 description scheme. Using this information, we generate content summaries which can be utilized in a user front-end to provide content-based access to the video stream, as well as further content-based queries and navigation on a video-on-demand streaming server.
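
    A simple cut detector can be sketched with color-histogram comparison standing in for the MPEG-7 visual descriptors used in the paper (the video path is a placeholder).

    ```python
    import cv2

    cap = cv2.VideoCapture("equestrian.mp4")
    prev_hist, cuts, i = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # Low correlation between consecutive histograms suggests a cut.
        if prev_hist is not None and \
           cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 0.5:
            cuts.append(i)
        prev_hist, i = hist, i + 1
    print(cuts)
    ```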

  16. Wireless augmented reality communication system

    NASA Technical Reports Server (NTRS)

    Devereaux, Ann (Inventor); Agan, Martin (Inventor); Jedrey, Thomas (Inventor)

    2006-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  17. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas (Inventor); Agan, Martin (Inventor); Devereaux, Ann (Inventor)

    2014-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  18. Wireless Augmented Reality Communication System

    NASA Technical Reports Server (NTRS)

    Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)

    2016-01-01

    The system of the present invention is a highly integrated radio communication system with a multimedia co-processor which allows true two-way multimedia (video, audio, data) access as well as real-time biomedical monitoring in a pager-sized portable access unit. The system is integrated in a network structure including one or more general purpose nodes for providing a wireless-to-wired interface. The network architecture allows video, audio and data (including biomedical data) streams to be connected directly to external users and devices. The portable access units may also be mated to various non-personal devices such as cameras or environmental sensors for providing a method for setting up wireless sensor nets from which reported data may be accessed through the portable access unit. The reported data may alternatively be automatically logged at a remote computer for access and viewing through a portable access unit, including the user's own.

  19. Smartphone-based photoplethysmographic imaging for heart rate monitoring.

    PubMed

    Alafeef, Maha

    2017-07-01

    The purpose of this study is to make use of visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera of a mobile phone to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most absolute errors in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient, portable solution for accurate HR detection and recording.
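
    A hedged sketch of the processing chain: average the red channel of each fingertip frame to form a PPG signal, then count peaks to estimate heart rate (the video path and 30 fps rate are assumptions).

    ```python
    import cv2
    import numpy as np
    from scipy.signal import find_peaks

    cap, fps, signal = cv2.VideoCapture("fingertip.mp4"), 30.0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        signal.append(frame[:, :, 2].mean())    # mean red channel (BGR order)

    signal = np.asarray(signal) - np.mean(signal)
    peaks, _ = find_peaks(signal, distance=fps * 0.4)  # >= 0.4 s between beats
    print(60.0 * len(peaks) / (len(signal) / fps))     # beats per minute
    ```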

  20. OceanVideoLab: A Tool for Exploring Underwater Video

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Wiener, C.

    2016-02-01

    Video imagery acquired with underwater vehicles is an essential tool for characterizing seafloor ecosystems and seafloor geology. It is a fundamental component of ocean exploration that facilitates real-time operations, augments multidisciplinary scientific research, and holds tremendous potential for public outreach and engagement. Acquiring, documenting, managing, preserving and providing access to large volumes of video acquired with underwater vehicles presents a variety of data stewardship challenges to the oceanographic community. As a result, only a fraction of underwater video content collected with research submersibles is documented, discoverable and/or viewable online. With more than 1 billion users, YouTube offers infrastructure that can be leveraged to help address some of the challenges associated with sharing underwater video with a broad global audience. Anyone can post content to YouTube, and some oceanographic organizations, such as the Schmidt Ocean Institute, have begun live-streaming video directly from underwater vehicles. OceanVideoLab (oceanvideolab.org) was developed to help improve access to underwater video through simple annotation, browse functionality, and integration with related environmental data. Any underwater video that is publicly accessible on YouTube can be registered with OceanVideoLab by simply providing a URL. It is strongly recommended that a navigational file also be supplied to enable geo-referencing of observations. Once a video is registered, it can be viewed and annotated using a simple user interface that integrates observations with vehicle navigation data if provided. This interface includes an interactive map and a list of previous annotations that allows users to jump to times of specific observations in the video. Future enhancements to OceanVideoLab will include the deployment of a search interface, the development of an application program interface (API) that will drive the search and enable querying of content by other systems/tools, the integration of related environmental data from complementary data systems (e.g. temperature, bathymetry), and the expansion of infrastructure to enable broad crowdsourcing of annotations.

  1. Overview of the H.264/AVC video coding standard

    NASA Astrophysics Data System (ADS)

    Luthra, Ajay; Topiwala, Pankaj N.

    2003-11-01

    H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema, and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.

  2. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

    This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360 degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be rigged with an integrated hardware and software infrastructure: a system of projectors that will work with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including generation of fisheye images, distortion correction, and the development of a reliable content-generation pipeline.

  3. Carbon, Climate and Cameras: Showcasing Arctic research through multimedia storytelling

    NASA Astrophysics Data System (ADS)

    Tachihara, B. L.; Linder, C. A.; Holmes, R. M.

    2011-12-01

    In July 2011, Tachihara spent three weeks in the Siberian Arctic documenting The Polaris Project, an NSF-funded effort that brings together an international group of undergraduate students and research scientists to study Arctic systems. Using a combination of photography, video and interviews gathered during the field course, we produced a six-minute film focusing on the researchers' quest to track carbon as it moves from terrestrial upland areas into lakes, streams, rivers and eventually into the Arctic Ocean. The overall goal was to communicate the significance of Arctic science in the face of changing climate. Using a selection of clips from the 2011 video, we will discuss the advantages and challenges specific to using multimedia presentations to represent Arctic research, as well as science in general. The full video can be viewed on the Polaris website: http://www.thepolarisproject.org.

  4. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  5. Dynamic full-scalability conversion in scalable video coding

    NASA Astrophysics Data System (ADS)

    Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man

    2007-02-01

    For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. However, current SVC support for dynamic video conversion with scalability is insufficient, so bitrate adaptation to fluctuating network conditions is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop the corresponding bitstream extraction, encoding, and decoding schemes. At the encoder, we insert IDR NAL units periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction; real-time extraction is achieved using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary under time-varying network conditions.

  6. A randomized controlled trial of soap opera videos streamed to smartphones to reduce risk of sexually transmitted human immunodeficiency virus (HIV) in young urban African American women.

    PubMed

    Jones, Rachel; Hoover, Donald R; Lacroix, Lorraine J

    2013-01-01

    Love, Sex, and Choices (LSC) is a soap opera video series created to reduce HIV sex risk in women. LSC was compared to text messages in a randomized trial of 238 high-risk, mostly Black, young urban women: 117 received 12 weekly LSC videos and 121 received 12 weekly HIV prevention messages on smartphones. Changes in unprotected sex with high-risk partners were compared by mixed models. Unprotected sex with high-risk men declined significantly over 6 months post-intervention in both arms, from 21-22 acts to 5-6 (p < 0.001). This reduction was 18% greater in the video arm than in the text arm, though the difference was not statistically significant. However, the LSC series was highly popular, and viewers wanted it to continue. This is the first study to report streaming soap opera video episodes to smartphones to reduce HIV risk. LSC holds promise as an Internet intervention that could be scaled up and combined with HIV testing. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  8. Symposium on the Nature of Science—Streaming Video Archive

    Science.gov Websites

    Oddone – Welcome Mark Ratner – Nano 201: A Gentle Introduction to Nanotechnology and Nanoscience Marsha – Incorporating Nanotechnology into the Curriculum (streamed session not available) Rich Marvin – Using

  9. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video are becoming more practical as data-gathering tools for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper explores the challenges presented by the abundance of motion imagery and video standards, interfaces, and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station are examined. The paper also discusses recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  10. To Stream or Not to Stream in a Quantitative Business Course

    ERIC Educational Resources Information Center

    Buhagiar, Tarek; Potter, Robert

    2010-01-01

    This paper investigates whether there is a difference in student learning in a quantitative business course taught through video streaming with the option of going to a face-to-face lecture, compared to the same course taught only through face-to-face lecture. This topic has been the subject of research in recent years because of the importance of…

  11. Mission critical cloud computing in a week

    NASA Astrophysics Data System (ADS)

    George, B.; Shams, K.; Knight, D.; Kinney, J.

    NASA's vision is to "reach for new heights and reveal the unknown so that what we do and learn will benefit all humankind." While our missions provide large volumes of unique and invaluable data to the scientific community, they also serve to inspire and educate the next generation of engineers and scientists. One critical aspect of "benefiting all humankind" is to make our missions as visible and accessible as possible to facilitate the transfer of scientific knowledge to the public. The recent successful landing of the Curiosity rover on Mars exemplified this vision: we shared the landing event via live video streaming and web experiences with millions of people around the world. The video stream on Curiosity's website was delivered by a highly scalable stack of computing resources in the cloud to cache and distribute the video stream to our viewers. While this work was done in the context of public outreach, it has extensive implications for the development of mission critical, highly available, and elastic applications in the cloud for a diverse set of use cases across NASA.

  12. Statistical data mining of streaming motion data for fall detection in assistive environments.

    PubMed

    Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P

    2011-01-01

    The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using either motion, audio or video sensors on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors is data streams that require real-time recognition, especially in emergency situations, so traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data are captured from the user's environment using overhead cameras, while motion data are collected from accelerometers on the subject's body, and both are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system, along with an initial evaluation of the achieved accuracy in detecting falls.
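
    An illustrative threshold sketch (much simpler than the paper's statistical-mining method): flag a fall when the acceleration magnitude spikes and is followed by a period of lying still; all thresholds are hypothetical.

    ```python
    import numpy as np

    def detect_fall(acc, fs=50, impact_g=2.5, still_g=0.3, still_s=2.0):
        mag = np.linalg.norm(acc, axis=1)       # |a| per sample, in g
        for i in np.where(mag > impact_g)[0]:   # candidate impact samples
            after = mag[i + fs : i + fs + int(still_s * fs)]
            if len(after) and np.all(np.abs(after - 1.0) < still_g):
                return True                     # impact, then inactivity
        return False

    acc = np.tile([0.0, 0.0, 1.0], (500, 1))    # 10 s of standing (1 g)
    acc[100] = [0.0, 0.0, 3.5]                  # simulated impact
    print(detect_fall(acc))                     # True
    ```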

  13. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities, while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded Datanet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near real-time sensor data, including seismic sensors, environmental sensors, LIDAR, and video streams, are available through this interface. A system for archiving sensor data and metadata in NetCDF format has been implemented and will be demonstrated at AGU.

  14. Measuring fish and their physical habitats: Versatile 2D and 3D video techniques with user-friendly software

    USGS Publications Warehouse

    Neuswanger, Jason R.; Wipfli, Mark S.; Rosenberger, Amanda E.; Hughes, Nicholas F.

    2017-01-01

    Applications of video in fisheries research range from simple biodiversity surveys to three-dimensional (3D) measurement of complex swimming, schooling, feeding, and territorial behaviors. However, researchers lack a transparently developed, easy-to-use, general purpose tool for 3D video measurement and event logging. Thus, we developed a new measurement system, with freely available, user-friendly software, easily obtained hardware, and flexible underlying mathematical methods capable of high precision and accuracy. The software, VidSync, allows users to efficiently record, organize, and navigate complex 2D or 3D measurements of fish and their physical habitats. Laboratory tests showed submillimetre accuracy in length measurements of 50.8 mm targets at close range, with increasing errors (mostly <1%) at longer range and for longer targets. A field test on juvenile Chinook salmon (Oncorhynchus tshawytscha) feeding behavior in Alaska streams found that individuals within aggregations avoided the immediate proximity of their competitors, out to a distance of 1.0 to 2.9 body lengths. This system makes 3D video measurement a practical tool for laboratory and field studies of aquatic or terrestrial animal behavior and ecology.

  15. Sixty Symbols, by The University of Nottingham

    NASA Astrophysics Data System (ADS)

    MacIsaac, Dan

    2009-11-01

    Faculty at the University of Nottingham are continuing to develop short (5-10 minutes long) insightful video-streamed vignettes for the web. Their earlier sites: Test Tube: Behind the World of Science and the widely known Periodic Table of Videos (a video on each element in the periodic table featured in WebSights last semester) have been joined by a new effort from the faculty of Physics, Astronomy and Engineering-Sixty Symbols: Videos about the Symbols of Physics and Astronomy. I liked the vignette on chi myself.

  16. Aggressive driving video and non-contact enforcement (ADVANCE): drivers' reaction to violation notices : summary of survey results

    DOT National Transportation Integrated Search

    2001-01-01

    ADVANCE is an integration of state of the practice, off-the-shelf technologies which include video, speed measurement, distance measurement, and digital imaging that detects UDAs in the traffic stream and subsequently notifies violators by ma...

  17. Satellite/Terrestrial Networks: End-to-End Communication Interoperability Quality of Service Experiments

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    1998-01-01

    Various issues associated with satellite/terrestrial end-to-end communication interoperability are presented in viewgraph form. Specific topics include: 1) Quality of service; 2) ATM performance characteristics; 3) MPEG-2 transport stream mapping to AAL-5; 4) Observation and discussion of compressed video tests over ATM; 5) Digital video over satellites status; 6) Satellite link configurations; 7) MPEG-2 over ATM with binomial errors; 8) MPEG-2 over ATM channel characteristics; 9) MPEG-2 over ATM over emulated satellites; 10) MPEG-2 transport stream with errors; and 11) Dual decoder test.

  18. Packet spacing: an enabling mechanism for delivering multimedia content in computational grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, A. C.; Feng, W. C.; Belford, Geneva G.

    2001-01-01

    Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating congestion-control mechanism, while most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
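
    A minimal sketch of inter-packet spacing: instead of bursting a UDP stream, the sender sleeps so packets leave at a target rate; in the actual scheme the rate would be adjusted by control feedback (the address and rate below are placeholders).

    ```python
    import socket
    import time

    def paced_send(data: bytes, addr, rate_bps=1_000_000, pkt=1200):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        gap = pkt * 8 / rate_bps                # seconds between packet starts
        for off in range(0, len(data), pkt):
            sock.sendto(data[off:off + pkt], addr)
            time.sleep(gap)                     # space packets, do not burst

    paced_send(b"\x00" * 12_000, ("127.0.0.1", 5004))  # 10 packets, ~9.6 ms apart
    ```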

  19. Towards real-time remote processing of laparoscopic video

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
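
    A quick check of the figures quoted above: a 30 fps stream of 11.9 MB frames, and the per-frame round-trip deadline for remote processing.

    ```python
    frame_mb, fps = 11.9, 30
    print(f"{frame_mb * fps:.0f} MB/s")  # ~357 MB/s, matching the ~360 MB/s cited
    print(f"{1000.0 / fps:.1f} ms")      # each frame must return within ~33.3 ms
    ```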

  20. Exploiting semantics for sensor re-calibration in event detection systems

    NASA Astrophysics Data System (ADS)

    Vaisenberg, Ronen; Ji, Shengyue; Hore, Bijit; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2008-01-01

    Event detection from a video stream is becoming an important and challenging task in surveillance and sentient systems. While computer vision has been extensively studied to solve different kinds of detection problems over time, it remains a hard problem, and even in a controlled environment only simple events can be detected with a high degree of accuracy. Instead of struggling to improve event detection using image processing alone, we bring in semantics to direct traditional image processing. Semantics are the underlying facts that hide beneath video frames, which cannot be "seen" directly by image processing. In this work we demonstrate that time-sequence semantics can be exploited to guide unsupervised re-calibration of the event detection system. We present an instantiation of our ideas using an appliance as an example, coffee pot level detection based on video data, to show that semantics can guide the re-calibration of the detection model. This work exploits time-sequence semantics to detect when re-calibration is required, to automatically relearn a new detection model for the newly evolved system state, and to resume monitoring with a higher rate of accuracy.

  1. Surgical videos online: a survey of prominent sources and future trends.

    PubMed

    Dinscore, Amanda; Andres, Amy

    2010-01-01

    This article determines the extent of the online availability and quality of surgical videos for the educational benefit of the surgical community. A comprehensive survey was performed that compared a number of online sites providing surgical videos according to their content, production quality, authority, audience, navigability, and other features. Methods for evaluating video content are discussed as well as possible future directions and emerging trends. Surgical videos are a valuable tool for demonstrating and teaching surgical technique and, despite room for growth in this area, advances in streaming video technology have made providing and accessing these resources easier than ever before.

  2. Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels

    NASA Astrophysics Data System (ADS)

    Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching

    2006-12-01

    This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of the channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channels have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with less and smoother quality degradation.

  3. System and method for image registration of multiple video streams

    DOEpatents

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  4. Remote stereoscopic video play platform for naked eyes based on the Android system

    NASA Astrophysics Data System (ADS)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology cannot meet the growing desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different video formats, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project which provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The Android player, which has all the basic functions of an ordinary player and is able to play normal 2D video, is the basic structure for further development; RTSP is implemented in this structure for communication. In order to achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats we process are left-right (side-by-side), top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. After updates and optimizations, the video player can play remote 3D video well anytime and anywhere, meeting users' requirements.
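
    A hedged sketch of the pixel-restructuring step for a left-right (side-by-side) source: columns of the two half-width views are interleaved, as a lenticular or parallax-barrier autostereoscopic panel might require; real display layouts vary.

    ```python
    import numpy as np

    frame = np.arange(4 * 8 * 3).reshape(4, 8, 3)   # stand-in decoded frame
    left, right = frame[:, :4], frame[:, 4:]        # split the side-by-side views

    out = np.empty_like(frame)
    out[:, 0::2] = left      # even columns carry the left-eye view
    out[:, 1::2] = right     # odd columns carry the right-eye view
    print(out.shape)
    ```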

  5. Representing videos in tangible products

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner; Weiting, Ralf

    2014-03-01

    Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The embedding of videos in printed products by generating QR codes and extracting relevant pictures from the video stream via a software implementation was the subject of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g., CEWE PHOTOBOOKS and greeting cards. We report the share of the different video formats used, the number of images extracted from a video in order to represent it, the positions in the book, and design strategies compared with regular books.

  6. Study of Temporal Effects on Subjective Video Quality of Experience.

    PubMed

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  7. Ice-Borehole Probe

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Carsey, Frank; Lane, Arthur; Engelhardt, Herman

    2006-01-01

    An instrumentation system has been developed for studying interactions between a glacier or ice sheet and the underlying rock and/or soil. Prior borehole imaging systems have been used in well-drilling and mineral-exploration applications and for studying relatively thin valley glaciers, but have not been used for studying thick ice sheets like those of Antarctica. The system includes a cylindrical imaging probe that is lowered into a hole that has been bored through the ice to the ice/bedrock interface by use of an established hot-water-jet technique. The images acquired by the cameras yield information on the movement of the ice relative to the bedrock and on visible features of the lower structure of the ice sheet, including ice layers formed at different times, bubbles, and mineralogical inclusions. At the time of reporting the information for this article, the system had just been deployed in two boreholes on the Amery Ice Shelf in East Antarctica, following successful 2000-2001 deployments in four boreholes at Ice Stream C, West Antarctica, and a 2002 deployment at Black Rapids Glacier, Alaska. The probe is designed to operate at temperatures from -40 to +40 °C and to withstand the cold, wet, high-pressure [130-atm (13.2-MPa)] environment at the bottom of a water-filled borehole in ice as deep as 1.6 km. A current version is being outfitted to service 2.4-km-deep boreholes at the Rutford Ice Stream in West Antarctica. The probe contains a side-looking charge-coupled-device (CCD) camera that generates both a real-time analog video signal and a sequence of still-image data, and contains a digital videotape recorder. The probe also contains a downward-looking CCD analog video camera, plus halogen lamps to illuminate the fields of view of both cameras. The analog video outputs of the cameras are converted to optical signals that are transmitted to a surface station via optical fibers in a cable. Electric power is supplied to the probe through wires in the cable at a potential of 170 VDC. A DC-to-DC converter steps the supply down to 12 VDC for the lights, cameras, and image-data-transmission circuitry. Heat generated by dissipation of electric power in the probe is removed simply by conduction through the probe housing to the adjacent water and ice.

  8. MWAHCA: A Multimedia Wireless Ad Hoc Cluster Architecture

    PubMed Central

    Diaz, Juan R.; Jimenez, Jose M.; Sendra, Sandra

    2014-01-01

    Wireless ad hoc networks provide a flexible and adaptable infrastructure to transport data over a great variety of environments. Recently, real-time audio and video data transmission has increased due to the appearance of many multimedia applications. One of the major challenges is to ensure the quality of multimedia streams after they have passed through a wireless ad hoc network, which requires adapting the network architecture to the multimedia QoS requirements. In this paper we propose a new architecture to organize and manage cluster-based ad hoc networks in order to deliver multimedia streams. The proposed architecture adapts the wireless network topology in order to improve the quality of audio and video transmissions. To achieve this goal, the architecture uses information such as each node's capacity and the QoS parameters (bandwidth, delay, jitter, and packet loss). The architecture splits the network into clusters that are specialized in specific multimedia traffic. The real-system performance study provided at the end of the paper demonstrates the feasibility of the proposal. PMID:24737996
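
    A minimal sketch of the cluster-selection idea, with invented cluster names and thresholds: a stream is routed to a cluster only if that cluster meets the stream's QoS bounds on bandwidth, delay, jitter, and loss. The paper's architecture is considerably richer than this.

    ```python
    # Toy QoS-aware cluster selection; names and thresholds are invented.
    def satisfies(cluster: dict, req: dict) -> bool:
        """A cluster can carry a stream if it offers at least the required
        bandwidth and does not exceed the delay/jitter/loss bounds."""
        return (cluster["bandwidth_kbps"] >= req["bandwidth_kbps"]
                and cluster["delay_ms"] <= req["delay_ms"]
                and cluster["jitter_ms"] <= req["jitter_ms"]
                and cluster["loss_pct"] <= req["loss_pct"])

    clusters = [
        {"name": "audio-cluster", "bandwidth_kbps": 256,  "delay_ms": 80,  "jitter_ms": 20, "loss_pct": 1.0},
        {"name": "video-cluster", "bandwidth_kbps": 2048, "delay_ms": 150, "jitter_ms": 40, "loss_pct": 2.0},
    ]
    video_req = {"bandwidth_kbps": 1024, "delay_ms": 200, "jitter_ms": 50, "loss_pct": 3.0}
    eligible = [c["name"] for c in clusters if satisfies(c, video_req)]
    print(eligible)  # ['video-cluster']
    ```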

  9. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    PubMed

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground-robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robots to the locations of targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with the other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further understanding of ground-robot operators' target-search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  10. 3D Tracking of Mating Events in Wild Swarms of the Malaria Mosquito Anopheles gambiae

    PubMed Central

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Yaro, Alpha S.; Dao, Adama; Traoré, Sekou F.; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.

    2013-01-01

    We describe an automated tracking system that allows us to reconstruct the 3D kinematics of individual mosquitoes in swarms of Anopheles gambiae. The inputs to the tracking system are video streams recorded from a stereo camera system. The tracker uses a two-pass procedure to automatically localize and track mosquitoes within the swarm. A human-in-the-loop step verifies the estimates and connects broken tracks. The tracker performance is illustrated using footage of mating events filmed in Mali in August 2010. PMID:22254411

  11. SonotaCo network and CAMS

    NASA Astrophysics Data System (ADS)

    Koseki, Masahiro

    2017-04-01

    One might expect the newly developed CAMS video system to obtain much finer results than the SonotaCo network system. There are indeed small differences between them, but a comparison of their statistics reveals that the two datasets are comparable in accuracy. We find that the SonotaCo system cannot detect slow meteors as well as CAMS, but it is superior for faster meteors. A more important difference between them is the definition of meteor showers, which results in curious stream data.

  12. Sensor fusion and augmented reality with the SAFIRE system

    NASA Astrophysics Data System (ADS)

    Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra

    2018-04-01

    The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using synthetic aperture radar (SAR), a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between the modalities.

  13. Activity recognition using Video Event Segmentation with Text (VEST)

    NASA Astrophysics Data System (ADS)

    Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge

    2014-06-01

    Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related messages and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.

  14. Method of determining the necessary number of observations for video stream documents recognition

    NASA Astrophysics Data System (ADS)

    Arlazarov, Vladimir V.; Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Janiszewski, Igor

    2018-04-01

    This paper discusses the task of document recognition on a sequence of video frames. In order to optimize processing speed, we estimate the stability of the recognition results obtained from several video frames. For identity document (Russian internal passport) recognition on a mobile device, it is shown that the number of observations necessary for obtaining a precise recognition result can be decreased significantly.
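
    The run-time intuition can be sketched as a simple stopping rule (this is not the paper's analytic derivation, and the helper below is hypothetical): recognize the same field on successive frames, combine the observations by majority vote, and stop once the combined result has been stable for a few consecutive frames.

    ```python
    from collections import Counter

    def recognize_until_stable(frame_results, patience: int = 3):
        """frame_results yields one recognized string per frame (e.g., a
        passport field); observations are combined by majority vote, and
        recognition stops once the combined result is unchanged for
        `patience` consecutive frames."""
        votes, stable, last, n = Counter(), 0, None, 0
        for n, result in enumerate(frame_results, start=1):
            votes[result] += 1
            current = votes.most_common(1)[0][0]   # combined (majority) result
            stable = stable + 1 if current == last else 1
            last = current
            if stable >= patience:
                break
        return last, n

    result, frames_used = recognize_until_stable(
        iter(["SM1TH", "SMITH", "SMITH", "SMITH", "SMITH"]))
    print(result, frames_used)  # SMITH 5
    ```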

  15. Remotely supported prehospital ultrasound: A feasibility study of real-time image transmission and expert guidance to aid diagnosis in remote and rural communities.

    PubMed

    Eadie, Leila; Mulhern, John; Regan, Luke; Mort, Alasdair; Shannon, Helen; Macaden, Ashish; Wilson, Philip

    2017-01-01

    Introduction Our aim is to expedite prehospital assessment of remote and rural patients using remotely-supported ultrasound and satellite/cellular communications. In this paradigm, paramedics are remotely-supported ultrasound operators, guided by hospital-based specialists, to record images before receiving diagnostic advice. Technology can support users in areas with little access to medical imaging and suboptimal communications coverage by connecting to multiple cellular networks and/or satellites to stream live ultrasound and audio-video. Methods An ambulance-based demonstrator system captured standard trauma and novel transcranial ultrasound scans from 10 healthy volunteers at 16 locations across the Scottish Highlands. Volunteers underwent brief scanning training before receiving expert guidance via the communications link. Ultrasound images were streamed with an audio/video feed to reviewers for interpretation. Two sessions were transmitted via satellite and 21 used cellular networks. Reviewers rated image and communication quality, and their utility for diagnosis. Transmission latency and bandwidth were recorded, and effects of scanner and reviewer experience were assessed. Results Appropriate views were provided in 94% of the simulated trauma scans. The mean upload rate was 835/150 kbps and mean latency was 114/2072 ms for cellular and satellite networks, respectively. Scanning experience had a significant impact on time to achieve a diagnostic image, and review of offline scans required significantly less time than live-streamed scans. Discussion This prehospital ultrasound system could facilitate early diagnosis and streamlining of treatment pathways for remote emergency patients, being particularly applicable in rural areas worldwide with poor communications infrastructure and extensive transport times.

  16. Social learning in nest-building birds watching live-streaming video demonstrators.

    PubMed

    Guillette, Lauren M; Healy, Susan D

    2018-02-13

    Determining the role that social learning plays in construction behaviours, such as nest building or tool manufacture, could be improved if more experimental control could be gained over the exact public information that is provided by the demonstrator to the observing individual. Using video playback allows the experimenter to choose what information is provided, but this will only be useful in determining the role of social learning if observers attend to, and learn from, videos in a manner that is similar to live demonstration. The goal of the current experiment was to test whether live-streamed video presentations of nest building by zebra finches Taeniopygia guttata would lead observers to copy the material choice demonstrated to them. Here, males that had not previously built a nest were given an initial preference test between materials of two colours. Those observers then watched live-stream footage of a familiar demonstrator building a nest with material of the colour that the observer did not prefer. After this experience, observers were given the chance to build a nest with materials of the two colours. Although two-thirds of the observer males preferred material of the demonstrated colour after viewing the demonstrator build a nest with material of that colour more than they had previously, their preference for the demonstrated material was not as strong as that of observers that had viewed live demonstrator builders in a previous experiment. Our results suggest researchers should proceed with caution before using video demonstration in tests of social learning.

  17. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  18. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  19. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  20. Modeling Periodic Impulsive Effects on Online TV Series Diffusion.

    PubMed

    Fu, Peihua; Zhu, Anding; Fang, Qiwen; Wang, Xi

    Online broadcasting substantially affects the production, distribution, and profit of TV series. In addition, online word-of-mouth significantly affects the diffusion of TV series. Because on-demand streaming rates are the most important factor that influences the earnings of online video suppliers, streaming statistics and forecasting trends are valuable. In this paper, we investigate the effects of periodic impulsive stimulation and pre-launch promotion on on-demand streaming dynamics. We consider imbalanced audience feverish distribution using an impulsive susceptible-infected-removed (SIR)-like model. In addition, we perform a correlation analysis of online buzz volume based on Baidu Index data. We propose a PI-SIR model to evolve audience dynamics and translate them into on-demand streaming fluctuations, which can be observed and comprehended by online video suppliers. Six South Korean TV series datasets are used to test the model. We develop a coarse-to-fine two-step fitting scheme to estimate the model parameters, first by fitting inter-period accumulation and then by fitting inner-period feverish distribution. We find that audience members display similar viewing habits. That is, they seek new episodes every update day but fade away. This outcome means that impulsive intensity plays a crucial role in on-demand streaming diffusion. In addition, the initial audience size and online buzz are significant factors. On-demand streaming fluctuation is highly correlated with online buzz fluctuation. To stimulate audience attention and interpersonal diffusion, it is worthwhile to invest in promotion near update days. Strong pre-launch promotion is also a good marketing tool to improve overall performance. It is not advisable for online video providers to promote several popular TV series on the same update day. Inter-period accumulation is a feasible forecasting tool to predict the future trend of the on-demand streaming amount. The buzz in public social communities also represents a highly correlated analysis tool to evaluate the advertising value of TV series.
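
    A toy impulsive SIR-style simulation conveys the mechanism (all parameter values are invented, and the paper's PI-SIR formulation and fitting procedure are more elaborate): between update days the watching audience decays out of the system, and each update day delivers an impulse that recruits part of the susceptible audience.

    ```python
    # Toy impulsive SIR-like simulation of episode-driven viewing. Between
    # update days the "watching" state I decays at rate gamma and grows by
    # word-of-mouth at rate beta; on each update day an impulse moves a
    # fraction of the susceptible audience S into I.
    def pi_sir(days=60, period=7, beta=0.05, gamma=0.3, impulse=0.25,
               s0=1.0, i0=0.0, dt=0.1):
        s, i, views = s0, i0, []
        steps_per_day = int(1 / dt)
        for day in range(days):
            if day % period == 0:            # update day: impulsive recruitment
                moved = impulse * s
                s, i = s - moved, i + moved
            for _ in range(steps_per_day):   # continuous dynamics within the day
                new = beta * s * i
                s -= new * dt
                i += (new - gamma * i) * dt
            views.append(i)                  # daily on-demand streaming proxy
        return views

    daily = pi_sir()
    print([round(v, 3) for v in daily[:10]])  # spike on day 0, decay until day 7
    ```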

  1. Modeling Periodic Impulsive Effects on Online TV Series Diffusion

    PubMed Central

    Fang, Qiwen; Wang, Xi

    2016-01-01

    Background Online broadcasting substantially affects the production, distribution, and profit of TV series. In addition, online word-of-mouth significantly affects the diffusion of TV series. Because on-demand streaming rates are the most important factor that influences the earnings of online video suppliers, streaming statistics and forecasting trends are valuable. In this paper, we investigate the effects of periodic impulsive stimulation and pre-launch promotion on on-demand streaming dynamics. We consider imbalanced audience feverish distribution using an impulsive susceptible-infected-removed (SIR)-like model. In addition, we perform a correlation analysis of online buzz volume based on Baidu Index data. Methods We propose a PI-SIR model to evolve audience dynamics and translate them into on-demand streaming fluctuations, which can be observed and comprehended by online video suppliers. Six South Korean TV series datasets are used to test the model. We develop a coarse-to-fine two-step fitting scheme to estimate the model parameters, first by fitting inter-period accumulation and then by fitting inner-period feverish distribution. Results We find that audience members display similar viewing habits. That is, they seek new episodes every update day but fade away. This outcome means that impulsive intensity plays a crucial role in on-demand streaming diffusion. In addition, the initial audience size and online buzz are significant factors. On-demand streaming fluctuation is highly correlated with online buzz fluctuation. Conclusion To stimulate audience attention and interpersonal diffusion, it is worthwhile to invest in promotion near update days. Strong pre-launch promotion is also a good marketing tool to improve overall performance. It is not advisable for online video providers to promote several popular TV series on the same update day. Inter-period accumulation is a feasible forecasting tool to predict the future trend of the on-demand streaming amount. The buzz in public social communities also represents a highly correlated analysis tool to evaluate the advertising value of TV series. PMID:27669520

  2. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard. It has many new features that can easily be used for video encryption. In this paper, we propose a new scheme for video encryption in the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. The USE scheme consists of two parts: video data classification and unequal secure video data encryption. First, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small and receives strong protection, while the unimportant data partition is large and receives lighter protection. Second, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and ensures high security. LEX is a new stream cipher based on AES whose computational cost is much lower than that of AES. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost of the hybrid AES/LEX module is 4678 gates, and the AES encryption throughput is about 50 Mbps.
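
    A minimal sketch of the USE idea follows, assuming Python's `cryptography` package: the small important partition goes through a block cipher (AES-CBC), the bulk partition through a cheaper stream cipher. LEX has no common library implementation, so AES-CTR stands in for the stream-cipher role here; the partitioning of the bitstream itself is also elided.

    ```python
    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_use(important: bytes, unimportant: bytes,
                    key_block: bytes, key_stream: bytes):
        """Unequal Secure Encryption sketch: a strong block cipher for the
        small, critical partition; a cheaper stream cipher for the bulk."""
        # Block-cipher path (AES-CBC) for the important partition.
        iv = os.urandom(16)
        pad = 16 - len(important) % 16                 # PKCS#7-style padding
        padded = important + bytes([pad]) * pad
        enc = Cipher(algorithms.AES(key_block), modes.CBC(iv)).encryptor()
        important_ct = iv + enc.update(padded) + enc.finalize()

        # Stream-cipher path for the unimportant partition. The paper uses
        # LEX, which has no common library implementation; AES-CTR stands in.
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(key_stream), modes.CTR(nonce)).encryptor()
        unimportant_ct = nonce + enc.update(unimportant) + enc.finalize()
        return important_ct, unimportant_ct

    k1, k2 = os.urandom(16), os.urandom(16)
    imp_ct, unimp_ct = encrypt_use(b"headers and motion vectors",
                                   b"residual coefficients" * 100, k1, k2)
    print(len(imp_ct), len(unimp_ct))
    ```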

  3. 75 FR 2511 - Manual for Courts-Martial; Proposed Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... persons of the same or opposite sex; (b) Bestiality; (c) Masturbation; (d) Sadistic or masochistic abuse...'' includes any developed or undeveloped photograph, picture, film or video; any digital or computer image, picture, film or video made by any means, including those transmitted by any means including streaming...

  4. Digital Video: Get with It!

    ERIC Educational Resources Information Center

    Van Horn, Royal

    2001-01-01

    Several years after the first audiovisual Macintosh computer appeared, most educators are still oblivious of this technology. Almost every other economic sector (including the porn industry) makes abundant use of digital and streaming video. Desktop movie production is so easy that primary grade students can do it. Tips are provided. (MLH)

  5. Next-Gen Video

    ERIC Educational Resources Information Center

    Arnn, Barbara

    2007-01-01

    This article discusses how schools across the US are using the latest videoconference and audio/video streaming technologies creatively to move to the next level of their very specific needs. At the Georgia Institute of Technology in Atlanta, the technology that is the backbone of the school's extensive distance learning program has to be…

  6. 78 FR 31800 - Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-24

    ...] Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video Description... should be the obligation of the apparatus manufacturer, under section 203, to ensure that the devices are... secondary audio stream on all equipment, including older equipment. In the absence of an industry solution...

  7. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics, as the mask and color image are merged using image warping based on depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
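
    The segmentation-and-merge step can be sketched with a simple threshold standing in for the system's near-IR segmentation (the actual pipeline also performs depth-based image warping): the IR-lit user is bright against the unlit background, so thresholding the IR image yields a foreground mask to apply to the color frame.

    ```python
    import numpy as np

    # Illustrative only: a threshold on the near-IR image stands in for the
    # system's segmentation; the depth-based warping that aligns the mask
    # with the color camera is omitted.
    def ir_foreground_composite(ir: np.ndarray, color: np.ndarray,
                                threshold: int = 80) -> np.ndarray:
        """ir: H x W uint8 near-IR image; color: H x W x 3 uint8 frame.
        Returns the color frame with background pixels zeroed (ready to be
        texture-mapped onto a billboard in the remote virtual environment)."""
        mask = ir > threshold                     # boolean foreground mask
        return color * mask[..., None].astype(color.dtype)

    ir = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    color = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    avatar = ir_foreground_composite(ir, color)
    print(avatar.shape)  # (480, 640, 3)
    ```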

  8. Taking Science On-air with Google+

    NASA Astrophysics Data System (ADS)

    Gay, P.

    2014-01-01

    Cost has long been a deterrent when trying to stream live events to large audiences. While streaming providers like UStream have free options, they include advertising and typically limit broadcasts to originating from a single location. In the autumn of 2011, Google premiered a new, free, video streaming tool -- Hangouts on Air -- as part of their Google+ social network. This platform allows up to ten different computers to stream live content to an unlimited audience, and automatically archives that content to YouTube. In this article we discuss best practices for using this technology to stream events over the internet.

  9. Streaming Media for Web Based Training.

    ERIC Educational Resources Information Center

    Childers, Chad; Rizzo, Frank; Bangert, Linda

    This paper discusses streaming media for World Wide Web-based training (WBT). The first section addresses WBT in the 21st century, including the Synchronized Multimedia Integration Language (SMIL) standard that allows multimedia content such as text, pictures, sound, and video to be synchronized for a coherent learning experience. The second…

  10. Academic podcasting: quality media delivery.

    PubMed

    Tripp, Jacob S; Duvall, Scott L; Cowan, Derek L; Kamauu, Aaron W C

    2006-01-01

    A video podcast of the CME-approved University of Utah Department of Biomedical Informatics seminar was created in order to address issues with streaming video quality, take advantage of popular web-based syndication methods, and make the files available for convenient, subscription-based download. An RSS feed, which is automatically generated, contains links to the media files and allows viewers to easily subscribe to the weekly seminars in a format that guarantees consistent video quality.
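
    A minimal sketch of such an automatically generated feed, using only the standard library and placeholder URLs: an RSS 2.0 document with one item per seminar and an enclosure element pointing at the media file, which is what podcast clients use for subscription-based download.

    ```python
    import xml.etree.ElementTree as ET

    def build_feed(title: str, link: str, episodes: list) -> str:
        """Build a minimal RSS 2.0 podcast feed; each episode dict supplies
        a title, a publication date, a media URL, and the file size."""
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = title
        ET.SubElement(channel, "link").text = link
        for ep in episodes:
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = ep["title"]
            ET.SubElement(item, "pubDate").text = ep["date"]
            ET.SubElement(item, "enclosure",
                          url=ep["url"], length=str(ep["bytes"]), type="video/mp4")
        return ET.tostring(rss, encoding="unicode")

    print(build_feed("Informatics Seminar", "https://example.edu/seminar",
                     [{"title": "Week 1", "date": "Mon, 09 Jan 2006 12:00:00 GMT",
                       "url": "https://example.edu/seminar/wk1.mp4",
                       "bytes": 104857600}]))
    ```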

  11. Performance evaluation of MPEG internet video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiajia; Wang, Ronggang; Fan, Kui; Wang, Zhenyu; Li, Ge; Wang, Wenmin

    2016-09-01

    Internet Video Coding (IVC) has been developed in MPEG by combining well-known existing technology elements and new coding tools with royalty-free declarations. In June 2015, the IVC project was approved as ISO/IEC 14496-33 (MPEG-4 Internet Video Coding). It is believed that this standard can be highly beneficial for video services in the Internet domain. This paper evaluates the objective and subjective performance of IVC by comparing it against Web Video Coding (WVC), Video Coding for Browsers (VCB), and AVC High Profile. Experimental results show that IVC's compression performance is approximately equal to that of the AVC High Profile for typical operational settings, both for streaming and low-delay applications, and is better than WVC and VCB.

  12. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
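
    The flavor of packet-level unequal FEC selection can be sketched as follows (the candidate codes, loss rate, and per-layer targets are invented; the paper's allocation is driven by its MANET trace analysis and rate constraints): for each JPEG 2000 layer, pick the cheapest Reed-Solomon erasure code whose residual failure probability meets that layer's target.

    ```python
    from math import comb

    def fail_prob(p: float, n: int, k: int) -> float:
        """RS(n, k) erasure decoding fails if more than n - k packets are lost,
        assuming independent packet losses at rate p."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(n - k + 1, n + 1))

    def pick_code(p: float, target: float, candidates):
        # candidates ordered from least to most redundant; return the first
        # (cheapest) code that meets the target, else the strongest one
        for n, k in candidates:
            if fail_prob(p, n, k) <= target:
                return n, k
        return candidates[-1]

    codes = [(10, 9), (10, 8), (10, 7), (10, 6), (10, 5)]
    p_loss = 0.05  # e.g., measured from network traces
    for layer, target in [("header", 1e-6), ("layer0", 1e-4), ("layer1", 1e-2)]:
        print(layer, pick_code(p_loss, target, codes))
    ```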

  13. Bandwidth auction for SVC streaming in dynamic multi-overlay

    NASA Astrophysics Data System (ADS)

    Xiong, Yanting; Zou, Junni; Xiong, Hongkai

    2010-07-01

    In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to dynamic join of competing peers and video layers.
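
    A toy version of the upstream peer's revenue-maximizing side of the auction (the paper's game additionally models repeated bidding rounds and content-aware SVC layer dropping/adding): sell upload bandwidth to the highest per-unit bids first.

    ```python
    # Greedy revenue-maximizing bandwidth allocation for a single upstream peer.
    def allocate(capacity_kbps: int, bids):
        """bids: list of (peer, kbps_requested, price_per_kbps)."""
        allocation, revenue = {}, 0.0
        for peer, kbps, price in sorted(bids, key=lambda b: -b[2]):
            granted = min(kbps, capacity_kbps)
            if granted <= 0:
                break
            allocation[peer] = granted
            revenue += granted * price
            capacity_kbps -= granted
        return allocation, revenue

    alloc, rev = allocate(1000, [("A", 600, 0.05), ("B", 500, 0.08), ("C", 300, 0.03)])
    print(alloc, rev)  # {'B': 500, 'A': 500} 65.0
    ```

    A downstream peer's bidding strategy would then adjust its bids, and the video layers it requests, based on how much bandwidth it was granted in previous rounds.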

  14. Using Video to Communicate Scientific Findings -- Habitat Connections in Urban Streams

    NASA Astrophysics Data System (ADS)

    Harned, D. A.; Moorman, M.; Fitzpatrick, F. A.; McMahon, G.

    2011-12-01

    The U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) provides information about (1) water-quality conditions and how those conditions vary locally, regionally, and nationally, (2) water-quality trends, and (3) factors that affect those conditions. As part of the NAWQA Program, the Effects of Urbanization on Stream Ecosystems (EUSE) study examined the vulnerability and resilience of streams to urbanization. Completion of the EUSE study has resulted in over 20 scientific publications. Video podcasts are being used in addition to these publications to communicate the relevance of these scientific findings to more general audiences such as resource managers, educational groups, public officials, and the general public. An example of one of the podcasts is a film examining effects of urbanization on stream habitat. "Habitat Connections in Urban Streams" explores how urbanization changes some of the physical features that provide in-stream habitat and examines examples of stream restoration projects designed to improve stream form and function. The "connections" theme is emphasized, including the connection of in-stream habitats from the headwaters to the stream mouth; connections between stream habitat and the surrounding floodplains, wetlands, and basin; and connections between streams and people: resource managers, public officials, scientists, and the general public. Examples of innovative stream restoration projects in Baltimore, Maryland; Milwaukee, Wisconsin; and Portland, Oregon, are shown, with interviews of managers, engineers, scientists, and others describing the projects. The film is combined with a website with links to extended film versions of the stream-restoration project interviews. The website and films are an example of USGS efforts aimed at improving science communication to a general audience. The film is available for access from the EUSE website: http://water.usgs.gov/nawqa/urban/html/podcasts.html. Additional films are planned to be released in 2012 on other USGS project results and programs.

  15. Webcasting in home and hospice care services: virtual communication in home care.

    PubMed

    Smith-Stoner, Marilyn

    2011-06-01

    Free live webcasting from home computers became much more widely available in 2007, when three military leaders from West Point developed Ustream.tv to help deployed military personnel stay connected with their families. There are many types of Web-based video streaming applications. This article describes Ustream, a free and effective communication tool to virtually connect staff. Ustream has many features, but the most useful for home care and hospice service providers are its ability to broadcast sound and video to anyone with a broadband Internet connection, a chat room where users can interact during a presentation, and the ability to have a "co-host," or second person, broadcast simultaneously. Agencies that provide community-based services in the home will benefit from integrating Web-based video streaming into their communication strategy.

  16. Accidental Turbulent Discharge Rate Estimation from Videos

    NASA Astrophysics Data System (ADS)

    Ibarra, Eric; Shaffer, Franklin; Savaş, Ömer

    2015-11-01

    A technique to estimate the volumetric discharge rate in accidental oil releases using high-speed video streams is described. The essence of the method is similar to PIV processing; however, the cross correlation is carried out on the visible features of the efflux, which are usually turbulent, opaque, and immiscible. The key step in the process is to perform pixelwise time filtering on the video stream, with filter parameters commensurate with the scales of the large eddies. The velocity field extracted from the shell of visible features is then used to construct an approximate velocity profile within the discharge. The technique has been tested in laboratory experiments using both water and oil jets at Re ~10^5. The technique is accurate to within 20%, which is sufficient for initial responders to deploy adequate resources for containment. The software package requires minimal user input and is intended for deployment on an ROV in the field. Supported by DOI via NETL.
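
    Under simplifying assumptions, the two core steps read as follows: a pixelwise moving-average filter suppresses small-scale texture so that only large-eddy-scale features remain, and FFT-based cross correlation between filtered frames recovers their dominant displacement, much as in PIV. The synthetic drifting pattern below is purely illustrative.

    ```python
    import numpy as np

    def temporal_filter(stack: np.ndarray, window: int) -> np.ndarray:
        """stack: (T, H, W) grayscale frames; running-mean filter along time."""
        kernel = np.ones(window) / window
        return np.apply_along_axis(lambda px: np.convolve(px, kernel, "valid"),
                                   0, stack)

    def displacement(a: np.ndarray, b: np.ndarray):
        """Peak of the circular cross correlation of two frames, as (dy, dx)."""
        corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrap-around offsets to signed displacements
        dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
        dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
        return dy, dx

    base = np.random.rand(64, 64)
    stack = np.stack([np.roll(base, s, axis=0) for s in range(20)])  # 1 px/frame drift
    filtered = temporal_filter(stack, window=5)
    print(displacement(filtered[0], filtered[-1]))  # (-15, 0) in this convention
    ```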

  17. Rapid Development of Orion Structural Test Systems

    NASA Astrophysics Data System (ADS)

    Baker, Dave

    2012-07-01

    NASA is currently validating the Orion spacecraft design for human space flight. Three systems developed by G Systems using hardware and software from National Instruments play an important role in the testing of the new Multi-Purpose Crew Vehicle (MPCV). A custom pressurization and venting system enables engineers to apply pressure inside the test article for measuring strain. A custom data acquisition system synchronizes over 1,800 channels of analog data. These data, along with multiple video and audio streams and calculated data, can be viewed, saved, and replayed in real time on multiple client stations. This paper presents design features and how the systems work together in a distributed fashion.

  18. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    NASA Astrophysics Data System (ADS)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase is based on a decision tree approach. We conducted a study on 5 volunteers, and the results show that our system is robust and competitive.
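
    Decision-level fusion with a decision tree can be sketched with scikit-learn on synthetic data (the paper's features, classes, and training set are its own): each recognizer contributes a label and a confidence, and the tree decides whether the movement was an intentional command.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 400
    # features: [eye_gesture_id, eye_confidence, hand_gesture_id, hand_confidence]
    X = np.column_stack([rng.integers(0, 4, n), rng.random(n),
                         rng.integers(0, 6, n), rng.random(n)])
    # toy ground truth: a movement counts as an intentional command only when
    # both recognizers are reasonably confident
    y = ((X[:, 1] > 0.5) & (X[:, 3] > 0.5)).astype(int)

    clf = DecisionTreeClassifier(max_depth=4).fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))
    ```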

  19. A Participative Tool for Sharing, Annotating and Archiving Submarine Video Data

    NASA Astrophysics Data System (ADS)

    Marcon, Y.; Kottmann, R.; Ratmeyer, V.; Boetius, A.

    2016-02-01

    Oceans cover more than 70 percent of the Earth's surface and are known to play an essential role in all of the Earth's systems and cycles. However, less than 5 percent of the ocean bottom has been explored, and many aspects of the deep-sea world remain poorly understood. Increasing our ocean literacy is a necessity in order for specialists and non-specialists to better grasp the roles of the ocean in the Earth system, its resources, and the impact of human activities on it. Due to technological advances, deep-sea research produces ever-increasing amounts of scientific video data. However, using such data for science communication and public outreach purposes remains difficult, as tools for accessing and sharing such scientific data are often lacking. Indeed, there is no common solution for the management and analysis of marine video data, which are often scattered across multiple research institutes or working groups, making it difficult to get an overview of the whereabouts of those data. The VIDLIB Deep-Sea Video Platform is a web-based tool for sharing and annotating time-coded deep-sea video data. VIDLIB provides a participatory way to share and analyze video data. Scientists can share expert knowledge for video analysis without the need to upload or download large video files. VIDLIB also offers streaming capabilities and has potential for participatory science and science communication, in that non-specialists can ask questions on what they see and get answers from scientists. Such a tool is highly valuable in terms of scientific public outreach and popular science. Video data are by far the most efficient way to communicate scientific findings to a non-expert public. VIDLIB is being used for studying the impact of deep-sea mining on benthic communities as well as for exploration in polar regions. We will present the structure and workflow of VIDLIB as well as an example of video analysis. VIDLIB (http://vidlib.marum.de) is funded by the EU EUROFLEET project and the Helmholtz Alliance ROBEX.

  20. A new display stream compression standard under development in VESA

    NASA Astrophysics Data System (ADS)

    Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James

    2017-09-01

    The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.

  1. Using Contemporary Technology Tools to Improve the Effectiveness of Teacher Educators in Special Education

    ERIC Educational Resources Information Center

    O'Brien, Chris; Aguinaga, Nancy J.; Hines, Rebecca; Hartshorne, Richard

    2011-01-01

    Ongoing developments in educational technology, including web-based instruction, streaming video, podcasting, video-conferencing, and the use of wikis and blogs to create learning communities, have substantial impact on distance education and preparation of special educators in rural communities. These developments can be overwhelming, however,…

  2. Turning Lemons into Lemonade: Teaching Assistive Technology through Wikis and Embedded Video

    ERIC Educational Resources Information Center

    Dreon, Oliver, Jr.; Dietrich, Nanette I.

    2009-01-01

    The authors teach instructional technology courses to pre-service teachers at Millersville University of Pennsylvania. The focus of the instructional technology courses is on the authentic use of instructional and assistive technology in the K-12 classroom. In this article, the authors describe how they utilize streaming videos in an educational…

  3. Toward a Video Pedagogy: A Teaching Typology with Learning Goals

    ERIC Educational Resources Information Center

    Andrist, Lester; Chepp, Valerie; Dean, Paul; Miller, Michael V.

    2014-01-01

    Given the massive volume of course-relevant videos now available on the Internet, this article outlines a pedagogy to facilitate the instructional employment of such materials. First, we describe special features of streaming media that have enabled their use in the classroom. Next, we introduce a typology comprised of six categories (conjuncture,…

  4. 78 FR 77074 - Accessibility of User Interfaces, and Video Programming Guides and Menus; Accessible Emergency...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-20

    ... Apparatus Requirements for Emergency Information and Video Description: Implementation of the Twenty- First... of apparatus covered by the CVAA to provide access to the secondary audio stream used for audible... availability of accessible equipment and, if so, what those notification requirements should be. The Commission...

  5. Coronal Hole Facing Earth

    NASA Image and Video Library

    2018-05-08

    An extensive equatorial coronal hole has rotated so that it is now facing Earth (May 2-4, 2018). The dark coronal hole extends about halfway across the solar disk. It was observed in a wavelength of extreme ultraviolet light. This magnetically open area is streaming solar wind (i.e., a stream of charged particles released from the sun) into space. When Earth enters a solar wind stream and the stream interacts with our magnetosphere, we often experience nice displays of aurora. Videos are available at https://photojournal.jpl.nasa.gov/catalog/PIA00624

  6. Direct Methanol Fuel Cell (DMFC) Battery Replacement Program

    DTIC Science & Technology

    2013-01-29

    selection of the Reynold’s number enables use of water for simulation of gas or liquid flow. Introduction of dye to the flow stream, with video...calibrated using a soap -film flow meter (Bubble-o-meter, Dublin, OH). Eleven Array system temperature regions were set as follows prior to start of...expected. The ar- ray flow proceeds down the columns: column effects would be more likely than row effects from a design of experiments perspective

  7. A web-based system for home monitoring of patients with Parkinson's disease using wearable sensors.

    PubMed

    Chen, Bor-Rong; Patel, Shyamal; Buckley, Thomas; Rednic, Ramona; McClure, Douglas J; Shih, Ludy; Tarsy, Daniel; Welsh, Matt; Bonato, Paolo

    2011-03-01

    This letter introduces MercuryLive, a platform to enable home monitoring of patients with Parkinson's disease (PD) using wearable sensors. MercuryLive contains three tiers: a resource-aware data collection engine that relies upon wearable sensors, web services for live streaming and storage of sensor data, and a web-based graphical user interface client with video conferencing capability. In addition, the platform can analyze sensor (i.e., accelerometer) data to reliably estimate clinical scores capturing the severity of tremor, bradykinesia, and dyskinesia. Testing results showed an average data latency of less than 400 ms and a video latency of about 200 ms at a frame rate of about 13 frames/s when 800 kb/s of bandwidth were available and 40% video compression was used, with data feature upload requiring 1 min of extra time following a 10-min interactive session. These results indicate that the proposed platform is suitable for monitoring patients with PD to facilitate the titration of medications in the late stages of the disease.

  8. From watermarking to in-band enrichment: future trends

    NASA Astrophysics Data System (ADS)

    Mitrea, M.; Prêteux, F.

    2009-02-01

    Coming across with the emerging Knowledge Society, the enriched video is nowadays a hot research topic, from both academic and industrial perspectives. The principle consists in associating to the video stream some metadata of various types (textual, audio, video, executable codes, ...). This new content is to be further exploited in a large variety of applications, like interactive DTV, games, e-learning, and data mining, for instance. This paper brings into evidence the potentiality of the watermarking techniques for such an application. By inserting the enrichment data into the very video to be enriched, three main advantages are ensured. First, no additional complexity is required from the terminal and the representation format point of view. Secondly, no backward compatibility issue is encountered, thus allowing a unique system to accommodate services from several generations. Finally, the network adaptation constraints are alleviated. The discussion is structured on both theoretical aspects (the accurate evaluation of the watermarking capacity in several reallife scenarios) as well as on applications developed under the framework of the R&D contracts conducted at the ARTEMIS Department.

  9. Multimedia applications in nursing curriculum: the process of producing streaming videos for medication administration skills.

    PubMed

    Sowan, Azizeh K

    2014-07-01

    Streaming videos (SVs) are commonly used multimedia applications in clinical health education. However, there are several negative aspects related to the production and delivery of SVs. Only a few published studies have included sufficient descriptions of the videos, the production process, and the design innovations. This paper describes the production of innovative SVs for medication administration skills for undergraduate nursing students at a public university in Jordan and focuses on the ethical and cultural issues in producing this type of learning resource. The curriculum development committee approved the modification of educational techniques for medication administration procedures to include SVs within an interactive web-based learning environment. The production process of the videos adhered to established principles for "protecting patients' rights when filming and recording" and included preproduction, production, and postproduction phases. Medication administration skills were videotaped in a skills laboratory, where they are usually taught to students, and also in a hospital setting with real patients. The lab videos included critical points and do's and don'ts, and the hospital videos fostered real-world practices. The videos were kept reasonably short to eliminate technical difficulty in access. Eight SVs were produced, covering different types of medication administration skills. The production of SVs required the collaborative efforts of experts in IT and multimedia, nursing and informatics educators, and nursing care providers. Results showed that the videos were well received by students and by the instructors who taught the course. The process of producing the videos in this project can be used as a valuable framework for schools considering utilizing multimedia applications in teaching. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Evaluation of a virtual pet visit system with live video streaming of patient images over the Internet in a companion animal intensive care unit in the Netherlands.

    PubMed

    Robben, Joris H; Melsen, Diede N; Almalik, Osama; Roomer, Wendy; Endenburg, Nienke

    2016-05-01

    To evaluate the impact of a virtual pet visit system ("TelePet" System, TPS) on owners and staff of a companion animal ICU. Longitudinal interventional study (2010-2013). Companion animal ICU at a university veterinary medical teaching hospital. Pet owners, ICU technicians. The introduction of the TPS, with live video streaming of patient images over the Internet, in a companion animal ICU. Pet owners experienced TPS as a valuable extra service. Most TPS users (72.4%) experienced less anxiety and felt less need (40.4% of TPS users) to visit their hospitalized pet in person. Most users (83.5%) shared TPS access with their family. The introduction of the TPS did not improve overall owner satisfaction, except for the score on "quality of medical treatment." Seven of 26 indicators of owner satisfaction were awarded higher scores by TPS users than by TPS nonusers in the survey after the introduction of the system. However, the lack of randomization of owners might have influenced findings. The enthusiasm of the ICU technicians for the system was tempered by the negative feedback from a small number of owners. Nevertheless they recognized the value of the system for owners. The system was user friendly and ICU staff and TPS users experienced few technical problems. As veterinary healthcare is moving toward a more client-centered approach, a virtual pet visit system, such as TPS, is a relatively simple application that may improve the well-being of most owners during the hospitalization of their pet. © Veterinary Emergency and Critical Care Society 2016.

  11. Asynchronous Video Streaming vs. Synchronous Videoconferencing for Teaching a Pharmacogenetic Pharmacotherapy Course

    PubMed Central

    2007-01-01

    Objectives To compare students' performance and course evaluations for a pharmacogenetic pharmacotherapy course taught by synchronous videoconferencing method via the Internet and for the same course taught via asynchronous video streaming via the Internet. Methods In spring 2005, a pharmacogenetic therapy course was taught to 73 students located on Amarillo, Lubbock, and Dallas campuses using synchronous videoconferencing, and in spring 2006, to 78 students located on the same 3 campuses using asynchronous video streaming. A course evaluation was administered to each group at the end of the courses. Results Students in the asynchronous setting had final course grades of 89% ± 7% compared to the mean final course grade of 87% ± 7% in the synchronous group (p = 0.05). Regardless of which technology was used, average course grades did not differ significantly among the 3 campus sites. Significantly more of the students in the asynchronous setting agreed (57%) with the statement that they could read the lecture notes and absorb the content on their own without attending the class than students in the synchronous class (23%; chi-square test; p < 0.001). Conclusions Students in both asynchronous and synchronous settings performed well. However, students taught using asynchronous videotaped lectures had lower satisfaction with the method of content delivery, and preferred live interactive sessions or a mix of interactive sessions and asynchronous videos over delivery of content using the synchronous or asynchronous method alone. PMID:17429516

  12. On-line content creation for photo products: understanding what the user wants

    NASA Astrophysics Data System (ADS)

    Fageth, Reiner

    2015-03-01

    This paper describes how videos can be incorporated into printed photo books and greeting cards. We show that, surprisingly or not, pictures taken from videos are used much like classical still images to tell compelling stories. Videos can be taken with nearly every camera: digital point-and-shoot cameras, DSLRs, smartphones, and increasingly so-called action cameras mounted on sports devices. The software implementation of videos in printed products, via generation of QR codes and extraction of relevant pictures from the video stream, was the content of last year's paper. This year we present first data on what content is displayed and how users represent their videos in printed products, e.g. CEWE PHOTOBOOKS and greeting cards. We also report the share of the different video formats used.

  13. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  14. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  15. Next Generation Integrated Environment for Collaborative Work Across Internets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey B. Newman

    2009-02-24

    We are now well-advanced in our development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth limitations. It has a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions, policy-based and network state-based resource scheduling, authentication, and optional encryption to maintain confidentiality of inter-personal communications. High performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application-sets.

  16. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
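    The separation described above can be sketched in a few lines of NumPy. The following is a minimal illustration of DMD-based background/foreground separation, assuming the video is supplied as a pixels-by-time matrix; the truncation rank and the zero-frequency threshold `eps` are illustrative choices, not values from the patent:

    ```python
    import numpy as np

    def dmd_separate(frames, rank=10, eps=1e-2):
        # frames: (n_pixels, n_frames) float array, columns are vectorized frames
        X, Y = frames[:, :-1], frames[:, 1:]               # time-shifted snapshot pairs
        U, s, Vh = np.linalg.svd(X, full_matrices=False)   # the single SVD
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]
        Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
        evals, W = np.linalg.eig(Atilde)
        Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # DMD modes
        omega = np.log(evals)                              # per-frame mode dynamics
        # the single linear solve: mode amplitudes from the first frame
        b = np.linalg.lstsq(Phi, frames[:, 0], rcond=None)[0]
        bg = np.abs(omega) < eps                           # near-zero frequency = static background
        t = np.arange(frames.shape[1])
        background = ((Phi[:, bg] * b[bg]) @ np.exp(np.outer(omega[bg], t))).real
        foreground = frames - background                   # sparse residual = moving objects
        return background, foreground
    ```

    The low-rank background is reconstructed from the modes whose eigenvalues sit near the unit circle at zero frequency; everything else falls into the sparse foreground residual, which is what makes the method so much cheaper than RPCA.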

  17. Application of M-JPEG compression hardware to dynamic stimulus production.

    PubMed

    Mulligan, J B

    1997-01-01

    Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.

  18. Live streaming video for medical education: a laboratory model.

    PubMed

    Gandsas, Alejandro; McIntire, Katherine; Palli, Guillermo; Park, Adrian

    2002-10-01

    At the University of Kentucky (UK), we applied streaming video technology to develop a webcast model that will allow institutions to broadcast live and prerecorded surgeries, conferences, and courses in real time over networks (the Internet or an intranet). We successfully broadcast a prerecorded laparoscopic paraesophageal hernia repair to domestic and international clients by using desktop computers equipped with off-the-shelf, streaming-enabled software and standard hardware and operating systems. A web-based user interface made accessing the educational material as simple as a mouse click and allowed clients to participate in the broadcast event via an embedded e-mail/chat module. Three client computers (two connected to the Internet and a third connected to the UK intranet) requested and displayed the surgical film by means of seven common network connection configurations. Significantly, no difference in image resolution was detected with the use of a connection speed faster than 128 kilobits per second (kbps). At this connection speed, an average bandwidth of 32.7 kbps was used, and although a 15-second delay was experienced from the time of data request to data display, the surgical film streamed continuously from beginning to end at a mean rate of 14.4 frames per second (fps). The clients easily identified all anatomic structures in full color motion, clearly followed all steps of the surgical procedure, and successfully asked questions and made comments by using the e-mail/chat module while viewing the surgery. With minimal financial investment, we have created an interactive virtual classroom with the potential to attract a global audience. Our webcast model represents a simple and practical method for institutions to supplement undergraduate and graduate surgical education and offer continuing medical education credits in a way that is convenient for clients (surgeons, students, residents, others). In the future, physicians may access streaming webcast material wirelessly with hand-held computers, so that they will be freed from computer stations.

  19. Combining Live Video and Audio Broadcasting, Synchronous Chat, and Asynchronous Open Forum Discussions in Distance Education

    ERIC Educational Resources Information Center

    Teng, Tian-Lih; Taveras, Marypat

    2004-01-01

    This article outlines the evolution of a unique distance education program that began as a hybrid--combining face-to-face instruction with asynchronous online teaching--and evolved to become an innovative combination of synchronous education using live streaming video, audio, and chat over the Internet, blended with asynchronous online discussions…

  20. Unmanned Warfare: Second and Third Order Effects Stemming from the Afghan Operational Environment between 2001 and 2010

    DTIC Science & Technology

    2011-06-10

    the very nature of warfare took a dramatic step into the future. With new assets capable of remaining airborne for nearly 24 hours and live video feeds streaming to... shape the battlefield during protracted combat operations. From the real time video feeds, to the 24 hour coverage of an area of interest, tangible

  1. An intelligent surveillance platform for large metropolitan areas with dense sensor deployment.

    PubMed

    Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A; Smilansky, Zeev

    2013-06-07

    This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is well suited for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.

  2. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  3. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  4. Improved technical performance of a multifunctional prehospital telemedicine system between the research phase and the routine use phase - an observational study.

    PubMed

    Felzen, Marc; Brokmann, Jörg C; Beckers, Stefan K; Czaplik, Michael; Hirsch, Frederik; Tamm, Miriam; Rossaint, Rolf; Bergrath, Sebastian

    2017-04-01

    Introduction Telemedical concepts in emergency medical services (EMS) lead to improved process times and patient outcomes, but their technical performance has thus far been insufficient; nevertheless, the concept was transferred into EMS routine care in Aachen, Germany. This study evaluated the system's technical performance and compared it to a precursor system. Methods The telemedicine system was implemented on seven ambulances, and a teleconsultation centre staffed with experienced EMS physicians was established in April 2014. Telemedical applications included mobile vital-data transmission, 12-lead ECG and picture transmission, and video streaming from inside the ambulances. The tele-EMS physician filled in a questionnaire after each mission between 15 May and 15 October 2014, covering the technical performance of the applications and background noise, and assessing the clinical value of the transmitted pictures and videos. Results Teleconsultation was established during 539 emergency cases. In 83% of the cases (n = 447), only the paramedics and the tele-EMS physician were involved. Transmission success rates ranged from 98% (audio connection) to 93% (12-lead electrocardiogram (ECG) transmission). All functionalities, except video transmission, were significantly better than in the pilot project (p < 0.05). Severe background noise was detected less often (p = 0.0004), and the pictures and videos were considered significantly more valuable clinically. Discussion The multifunctional system is now sufficient for routine use and is the most reliable mobile emergency telemedicine system among published projects. Dropouts were due to user errors and network coverage problems. These findings enable widespread use of this system in the future, reducing the critical time intervals until medical therapy is started.

  5. Buy, Borrow, or Steal? Film Access for Film Studies Students

    ERIC Educational Resources Information Center

    Rodgers, Wendy

    2018-01-01

    Libraries offer a mix of options to serve the film studies curriculum: streaming video, DVDs on Reserve, and streaming DVDs through online classrooms. Some professors screen films and lend DVDs to students. But how do students obtain the films required for their courses? How would they prefer to do so? These are among the questions explored using…

  6. A simulator tool set for evaluating HEVC/SHVC streaming

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Nightingale, James; Wang, Qi; Grecos, Christos; Kehtarnavaz, Nasser

    2015-02-01

    Video streaming and other multimedia applications account for an ever increasing proportion of all network traffic. The recent adoption of High Efficiency Video Coding (HEVC) as the H.265 standard provides many opportunities for new and improved multimedia services and applications in the consumer domain. Since the delivery of version one of H.265, the Joint Collaborative Team on Video Coding has been working towards standardisation of a scalable extension (SHVC) to the H.265 standard and a series of range extensions and new profiles. As these enhancements are added to the standard, the range of potential applications and research opportunities will expand. For example, the use of video is also growing rapidly in other sectors such as safety, security, defence and health, with real-time high quality video transmission playing an important role in areas like critical infrastructure monitoring and disaster management, each of which may benefit from the application of enhanced HEVC/H.265 and SHVC capabilities. The majority of existing research into HEVC/H.265 transmission has focussed on the consumer domain, addressing issues such as broadcast transmission and delivery to mobile devices, with the lack of freely available tools widely cited as an obstacle to conducting this type of research. In this paper we present a toolset which facilitates the transmission and evaluation of HEVC/H.265 and SHVC encoded video on the popular open source NCTUns simulator. Our toolset provides researchers with a modular, easy to use platform for evaluating video transmission and adaptation proposals on large scale wired, wireless and hybrid architectures. The toolset consists of pre-processing, transmission, SHVC adaptation and post-processing tools to gather and analyse statistics. It has been implemented using HM15 and SHM5, the latest versions of the HEVC and SHVC reference software implementations, to ensure that currently adopted proposals for scalable and range extensions to the standard can be investigated. We demonstrate the effectiveness and usability of our toolset by evaluating SHVC streaming and adaptation to meet terminal constraints and network conditions in a range of wired, wireless, and large scale wireless mesh network scenarios, each of which is designed to simulate a realistic environment. Our results are compared to those for H.264/SVC, the scalable extension to the existing H.264/AVC advanced video coding standard.

  7. Novel Uses of Video to Accelerate the Surgical Learning Curve.

    PubMed

    Ibrahim, Andrew M; Varban, Oliver A; Dimick, Justin B

    2016-04-01

    Surgeons are under enormous pressure to continually improve and learn new surgical skills. Novel uses of surgical video in the preoperative, intraoperative, and postoperative setting are emerging to accelerate the learning curve of surgical skill and minimize harm to patients. In the preoperative setting, social media outlets provide a valuable platform for surgeons to collaborate and plan for difficult operative cases. Live streaming of video has allowed for intraoperative telementoring. Finally, postoperative use of video has provided structure for peer coaching to evaluate and improve surgical skill. Putting these approaches into practice is becoming easier, as most surgical platforms (e.g., laparoscopy and endoscopy) now have video recording technology built in, and video editing software has become more user friendly. Future applications of video technology are being developed, including possible integration into accreditation and board certification.

  8. About subjective evaluation of adaptive video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso

    2015-03-01

    The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. With the video content available in multiple qualities, HAS allows the quality of the downloaded video to be adapted to current network conditions, providing smooth video playback. However, the time-varying video quality by itself introduces a new type of impairment. The quality adaptation can be done in different ways. In order to find the adaptation strategy that best maximizes users' perceptual quality, it is necessary to investigate the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparatively long duration make most standardized assessment methodologies ill-suited to studying HAS degradations. Furthermore, in traditional testing methodologies, the video quality of audiovisual services is often evaluated separately rather than in the presence of audio. The requirement of jointly evaluating audio and video within a subjective test remains a relatively under-explored research field. In this work, we address the research question of determining the appropriate assessment methodology to evaluate sequences with time-varying quality due to adaptation. This was done by studying the influence of different adaptation-related parameters through two subjective experiments using a methodology developed to evaluate long test sequences. In order to study the impact of audio presence on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single stimulus Absolute Category Rating (ACR) methodology.

  9. A Cloud-Based Architecture for Smart Video Surveillance

    NASA Astrophysics Data System (ADS)

    Valentín, L.; Serrano, S. A.; Oves García, R.; Andrade, A.; Palacios-Alonso, M. A.; Sucar, L. Enrique

    2017-09-01

    Turning a city into a smart city has attracted considerable attention. A smart city can be seen as a city that uses digital technology not only to improve the quality of people's lives, but also to have a positive impact on the environment and, at the same time, offer efficient and easy-to-use services. A fundamental aspect to be considered in a smart city is people's safety and welfare; therefore, a good security system becomes a necessity, because it allows us to detect and identify potential risk situations and then take appropriate decisions to help people or even prevent criminal acts. In this paper we present an architecture for automated video surveillance, based on the cloud computing schema, capable of acquiring a video stream from a set of cameras connected to the network, processing that information, detecting, labelling and highlighting security-relevant events automatically, storing the information, and providing situational awareness in order to minimize the response time needed to take the appropriate action.

  10. Deriving video content type from HEVC bitstream semantics

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence, using the weighted average of the depth at which the coding unit quadtree is split to estimate spatial characteristics, and the weighted average of the prediction mode decisions made by the encoder to estimate temporal characteristics. Since the video content type of a sequence is determined using high-level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE-oriented adaptive real-time streaming.
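    To make the heuristic concrete, here is a sketch of how such spatio-temporal features might be computed from high-level syntax parsed out of an HEVC bitstream. The paper uses weighted averages; this sketch uses plain averages, and the input field names ("cu_depths", "pu_modes") are invented for illustration, not taken from the paper:

    ```python
    def spatio_temporal_features(ctus):
        # ctus: coding tree units parsed from the bitstream, e.g. dicts like
        # {"cu_depths": [0, 1, 2, ...], "pu_modes": ["inter", "intra", ...]}
        depth_sum = n_cu = intra = inter = 0
        for ctu in ctus:
            for d in ctu["cu_depths"]:       # quadtree split depth per coding unit
                depth_sum += d
                n_cu += 1
            for m in ctu["pu_modes"]:        # prediction mode per prediction unit
                if m == "intra":
                    intra += 1
                elif m == "inter":
                    inter += 1
        spatial = depth_sum / max(n_cu, 1)            # deeper splits => more spatial detail
        temporal = intra / max(intra + inter, 1)      # more intra PUs => harder motion
        return spatial, temporal
    ```

    Because both features come straight from entropy-decoded syntax elements, no pixel reconstruction is needed, which is what makes the approach cheap enough for real-time adaptation decisions.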

  11. Analog Video Authentication and Seal Verification Equipment Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory Lancaster

    Under contract to the US Department of Energy in support of arms control treaty verification activities, the Savannah River National Laboratory, in conjunction with the Pacific Northwest National Laboratory, the Idaho National Laboratory and Milagro Consulting, LLC, developed equipment for use within a chain of custody regime. This paper discusses two specific devices, the Authentication Through the Lens (ATL) analog video authentication system and a photographic multi-seal reader. Both of these devices have been demonstrated in a field trial, and the experience gained throughout will also be discussed. Typically, cryptographic methods are used to prove the authenticity of digital images and video used in arms control chain of custody applications. However, in some applications analog cameras are used. Since cryptographic authentication methods will not work on analog video streams, a simple method of authenticating analog video was developed and tested. A photographic multi-seal reader was developed to image different types of visual unique identifiers for use in chain of custody and authentication activities. This seal reader is unique in its ability to image various types of seals including the Cobra Seal, Reflective Particle Tags, and adhesive seals. Flicker comparison is used to compare before and after images collected with the seal reader in order to detect tampering and verify the integrity of the seal.

  12. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation to imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  13. An openstack-based flexible video transcoding framework in live

    NASA Astrophysics Data System (ADS)

    Shi, Qisen; Song, Jianxin

    2017-08-01

    With the rapid development of the mobile live-streaming business, transcoding HD video is often a challenge for mobile devices due to their limited processing capability and bandwidth-constrained network connections. For live service providers, it is wasteful to deploy large numbers of transcoding servers, because some of them sit idle much of the time. To deal with this issue, this paper proposes an OpenStack-based flexible transcoding framework that achieves real-time video adaptation for mobile devices and makes efficient use of computing resources. To this end, we introduce a method of video stream splitting and VM resource scheduling based on access-pressure prediction, where the pressure is forecast with an autoregressive (AR) model.
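    A minimal sketch of the access-pressure forecasting step, assuming the AR coefficients are fit by ordinary least squares on a history of per-interval request counts (the model order and the scheduling policy built on top of it are not specified in the abstract):

    ```python
    import numpy as np

    def ar_forecast(series, p=3):
        # Fit x[t] = a1*x[t-1] + ... + ap*x[t-p] + c by least squares,
        # then return the one-step-ahead prediction.
        x = np.asarray(series, dtype=float)
        X = np.array([x[t - p:t][::-1] for t in range(p, len(x))])  # lag rows
        X = np.hstack([X, np.ones((len(X), 1))])                    # intercept column
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        last = x[-p:][::-1]                                          # most recent lags
        return float(coef[:-1] @ last + coef[-1])
    ```

    The forecast would then drive the scheduler, e.g. spinning up an additional transcoding VM whenever the predicted load crosses the capacity of the currently running pool, and releasing VMs when it falls back below.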

  14. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    PubMed

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.

  15. A shower look-up table to trace the dynamics of meteoroid streams and their sources

    NASA Astrophysics Data System (ADS)

    Jenniskens, Petrus

    2018-04-01

    Meteor showers are caused by meteoroid streams from comets (and some primitive asteroids). They trace the comet population and its dynamical evolution, warn of dangerous long-period comets that can pass close to Earth's orbit, outline volumes of space with a higher satellite impact probability, and define how meteoroids evolve in the interplanetary medium. Ongoing meteoroid orbit surveys have mapped these showers in recent years, but the surveys are now running up against a more and more complicated scene. The IAU Working List of Meteor Showers has reached 956 entries to be investigated (as of March 1, 2018). The picture is even more complicated with the discovery that radar-detected streams are often different from, or differently distributed than, video-detected streams. Complicating matters even more, some meteor showers are active over many months, during which their radiant position gradually changes, which makes the use of mean orbits as a proxy for a meteoroid stream's identity meaningless. The dispersion of the stream in space and time is important to that identity and contains much information about its origin and dynamical evolution. To make sense of the meteor shower zoo, a Shower Look-Up Table was created that captures this dispersion. The Shower Look-Up Table has enabled the automated identification of showers in the ongoing CAMS video-based meteoroid orbit survey, results of which are presented now online in near-real time at http://cams.seti.org/FDL/. Visualization tools have been built that depict the streams in a planetarium setting. Examples will be presented that sample the range of meteoroid streams that this look-up table describes. Possibilities for further dynamical studies will be discussed.

  16. Point-of-View Recording Devices for Intraoperative Neurosurgical Video Capture.

    PubMed

    Porras, Jose L; Khalid, Syed; Root, Brandon K; Khan, Imad S; Singer, Robert J

    2016-01-01

    The ability to record and stream neurosurgery is an unprecedented opportunity to further research, medical education, and quality improvement. Here, we appraise the ease of implementation of existing point-of-view devices when capturing and sharing procedures from the neurosurgical operating room and detail their potential utility in this context. Our neurosurgical team tested and critically evaluated features of the Google Glass and Panasonic HX-A500 cameras, including ergonomics, media quality, and media sharing in both the operating theater and the angiography suite. Existing devices boast several features that facilitate live recording and streaming of neurosurgical procedures. Given that their primary application is not intended for the surgical environment, we identified a number of concrete, yet improvable, limitations. The present study suggests that neurosurgical video capture and live streaming represents an opportunity to contribute to research, education, and quality improvement. Despite this promise, shortcomings render existing devices impractical for serious consideration. We describe the features that future recording platforms should possess to improve upon existing technology.

  17. Identification and annotation of erotic film based on content analysis

    NASA Astrophysics Data System (ADS)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    This paper presents a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into a video stream and an audio stream. The video stream is then segmented into shots, and key frames are extracted from each shot. We filter the shots that include potentially erotic content by finding nude human bodies in the key frames. A Gaussian model in the YCbCr color space for detecting skin regions is presented. An external polygon that covers the skin regions is used as an approximation of the human body. Finally, we estimate the degree of nudity by calculating the ratio of skin area to whole-body area with weighted parameters. Experimental results show the effectiveness of our method.
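    A minimal sketch of the skin-detection step, assuming a single Gaussian over the (Cb, Cr) chrominance plane. The mean and covariance below are placeholders, since the paper's fitted parameters are not given in the abstract, and the returned ratio here is over the whole frame, whereas the paper normalizes by the body-polygon area:

    ```python
    import numpy as np

    # Illustrative skin-tone parameters; the paper fits its own Gaussian to training pixels.
    MEAN = np.array([110.0, 150.0])                  # mean (Cb, Cr) of skin pixels
    COV_INV = np.linalg.inv(np.array([[80.0, 0.0],   # inverse covariance
                                      [0.0, 60.0]]))

    def skin_ratio(rgb):
        """Return a skin mask and the skin-pixel ratio for an RGB uint8 image."""
        r, g, b = [rgb[..., i].astype(float) for i in range(3)]
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b    # ITU-R BT.601 conversion
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        d = np.stack([cb - MEAN[0], cr - MEAN[1]], axis=-1)
        maha = np.einsum("...i,ij,...j->...", d, COV_INV, d)  # Mahalanobis distance
        mask = maha < 4.0                                     # inside ~2-sigma ellipse
        return mask, float(mask.mean())
    ```

    Working in chrominance alone makes the model largely insensitive to lighting intensity, which is the usual motivation for choosing YCbCr over RGB for skin detection.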

  18. Development and Assessment of Web Courses That Use Streaming Audio and Video Technologies.

    ERIC Educational Resources Information Center

    Ingebritsen, Thomas S.; Flickinger, Kathleen

    Iowa State University, through a program called Project BIO (Biology Instructional Outreach), has been using RealAudio technology for about 2 years in college biology courses that are offered entirely via the World Wide Web. RealAudio is a type of streaming media technology that can be used to deliver audio content and a variety of other media…

  19. Perceptual video quality comparison of 3DTV broadcasting using multimode service systems

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Lee, Chulhee

    2015-05-01

    Multimode service (MMS) systems allow broadcasters to provide multichannel services using a single HD channel. Using these systems, it is possible to provide 3DTV programs that can be watched either in three-dimensional (3-D) or two-dimensional (2-D) modes with backward compatibility. In the MMS system for 3DTV broadcasting using the Advanced Television Systems Committee standards, the left and the right views are encoded using MPEG-2 and H.264, respectively, and then transmitted using a dual HD streaming format. The left view, encoded using MPEG-2, assures 2-D backward compatibility while the right view, encoded using H.264, can be optionally combined with the left view to generate stereoscopic 3-D views. We analyze 2-D and 3-D perceptual quality when using the MMS system by comparing items in the frame-compatible format (top-bottom), which is a conventional transmission scheme for 3-D broadcasting. We performed perceptual 2-D and 3-D video quality evaluation assuming 3DTV programs are encoded using the MMS system and top-bottom format. The results show that MMS systems can be preferable with regard to perceptual 2-D and 3-D quality and backward compatibility.

  20. On continuous user authentication via typing behavior.

    PubMed

    Roth, Joseph; Liu, Xiaoming; Metaxas, Dimitris

    2014-10-01

    We hypothesize that an individual computer user has a unique and consistent habitual pattern of hand movements, independent of the text, while typing on a keyboard. As a result, this paper proposes a novel biometric modality named typing behavior (TB) for continuous user authentication. Given a webcam pointing toward a keyboard, we develop real-time computer vision algorithms to automatically extract hand movement patterns from the video stream. Unlike the typical continuous biometrics, such as keystroke dynamics (KD), TB provides a reliable authentication with a short delay, while avoiding explicit key-logging. We collect a video database where 63 unique subjects type static text and free text for multiple sessions. For one typing video, the hands are segmented in each frame and a unique descriptor is extracted based on the shape and position of hands, as well as their temporal dynamics in the video sequence. We propose a novel approach, named bag of multi-dimensional phrases, to match the cross-feature and cross-temporal pattern between a gallery sequence and probe sequence. The experimental results demonstrate a superior performance of TB when compared with KD, which, together with our ultrareal-time demo system, warrant further investigation of this novel vision application and biometric modality.

  1. MPEG-7 audio-visual indexing test-bed for video retrieval

    NASA Astrophysics Data System (ADS)

    Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian

    2003-12-01

    This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, an end-user will be able to ask for movie shots in the database that were produced in a specific year, that contain the face of a specific actor saying a specific word, and in which there is no motion activity. Video streaming is performed over the high bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.

  2. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight-planning software. During the flight, the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  3. Captivating Broad Audiences with an Internet-connected Ocean

    NASA Astrophysics Data System (ADS)

    Moran, K.; Elliott, L.; Gervais, F.; Juniper, K.; Owens, D.; Pirenne, B.

    2012-12-01

    NEPTUNE Canada, a network of Ocean Networks Canada and the first deep water cabled ocean observatory, began operations in December 2009. Located offshore Canada's west coast, the network streams data from passive, active, and interactive sensors positioned at five nodes along its 800 km long looped cable to the Internet. This technically advanced system includes a sophisticated data management and archiving system, which enables the collection of real-time physical, chemical, geological, and biological oceanographic data, including video, at resolutions relevant for furthering our understanding of the dynamics of the earth-ocean system. Scientists in Canada and around the world comprise the primary audience for these data, but NEPTUNE Canada is also serving these data to broader audiences including K-16 students and teachers, informal educators, citizen scientists, the press, and the public. Here we present our engagement tools, approaches, and experiences including electronic books, personal phone apps, Internet-served video, social media, mini-observatory systems, print media, live broadcasting from sea, and a citizen scientist portal. NEPTUNE Canada's iBook is available on Apple's iBooks Store.

  4. Watermarking textures in video games

    NASA Astrophysics Data System (ADS)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.

  5. Robotic Telesurgery Research

    DTIC Science & Technology

    2010-03-01

    piece of tissue. Full Mobility Manipulator Robot The primary challenge with the design of a full mobility robot is meeting the competing design...streamed through an embedded plug-in for VLC player using asf/wmv encoding with 200ms buffering. A benchtop test of the remote user interface was...encountered in ensuring quality video is being made available to the surgeon. A significant challenge has been to consistently provide high quality video

  6. Literature review on risky driving videos on YouTube: Unknown effects and areas for concern?

    PubMed

    Vingilis, Evelyn; Yıldırım-Yenier, Zümrüt; Vingilis-Jaremko, Larissa; Wickens, Christine; Seeley, Jane; Fleiter, Judy; Grushka, Daniel H

    2017-08-18

    Entry of terms reflective of extreme risky driving behaviors into the YouTube website yields millions of videos. The majority of the top 20 highly subscribed automotive YouTube websites are focused on high-performance vehicles, high speed, and often risky driving. Moreover, young men are the heaviest users of online video sharing sites, overall streaming more videos, and watching them longer than any other group. The purpose of this article is to review the literature on YouTube videos and risky driving. A systematic search was performed using the following specialized database sources (Scopus, PubMed, Web of Science, ERIC, and Google Scholar) for the years 2005-2015 for articles in the English language. Search words included "YouTube AND driving," "YouTube AND speeding," "YouTube AND racing." No published research was found on the content of risky driving videos or on the effects of these videos on viewers. This literature review presents the current state of our published knowledge on the topic, which includes a review of the effects of mass media on risky driving cognitions, attitudes and behavior; similarities and differences between mass and social media; information on the YouTube platform; psychological theories that could support YouTube's potential effects on driving behavior; and 2 examples of risky driving behaviors ("sidewalk skiing" and "ghost riding the whip") suggestive of varying levels of modeling behavior in subsequent YouTube videos. Every month about 1 billion individuals are reported to view YouTube videos (ebizMBA Guide 2015) and young men are the heaviest users, overall streaming more YouTube videos and watching them longer than women and other age groups (Nielsen 2011). This group is also the most dangerous group in traffic, engaging in more per capita violations and experiencing more per capita injuries and fatalities (e.g., Parker et al. 1995; Reason et al. 1990; Transport Canada 2015; World Health Organization 2015). YouTube also contains many channels depicting risky driving videos. The time has come for the traffic safety community to begin exploring these relationships.

  7. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique applied at the decoder side to hide transmission errors. It works by analyzing the spatial or temporal information in the available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, videoconferencing, TV, DVD, Internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is often the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index), computed between the original frames and the concealed frames for both algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures than the Block Matching algorithm: 48% higher PSNR and 94% higher SSIM.
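    For reference, PSNR, the first objective metric used above, can be computed in a few lines; this is the generic definition, not code from the paper. SSIM is more involved, and an existing implementation such as skimage.metrics.structural_similarity from scikit-image would typically be used:

    ```python
    import numpy as np

    def psnr(reference, concealed, peak=255.0):
        """PSNR in dB between a reference frame and its concealed reconstruction."""
        mse = np.mean((reference.astype(np.float64) - concealed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")                 # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)
    ```

    A higher PSNR for the concealed frame against the error-free original indicates more effective concealment, which is how the 48% PSNR improvement above should be read.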

  8. Final Report: Non-Visible, Automated Target Acquisition and Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, Klaus-Peter; Fabris, Lorenzo; Goddard, James K.

    The Roadside Tracker (RST) represents a new approach to radiation portal monitors. It uses a combination of gamma-ray and visible-light imaging to localize gamma-ray radiation sources to individual vehicles in free-flowing, multi-lane traffic. Deployed as two trailers that are parked on either side of the roadway (Fig. 1), the RST scans passing traffic with two large gamma-ray imagers, one mounted in each trailer. The system compensates for vehicle motion through the imagers' fields of view by using automated target acquisition and tracking (TAT) software applied to a stream of video images. Once a vehicle has left the field of view, the radiation image of that vehicle is analyzed for the presence of a source, and if one is found, an alarm is sounded. The gamma-ray image is presented to the operator together with the video image of the traffic stream when the vehicle was approximately closest to the system (Fig. 2). The offending vehicle is identified with a bounding box to distinguish it from other vehicles that might be present at the same time. The system was developed under a previous grant from the Department of Homeland Security's (DHS's) Domestic Nuclear Detection Office (DNDO). This report documents work performed with follow-on funding from DNDO to further advance the development of the RST. Specifically, the primary thrust was to extend the performance envelope of the system by replacing the visible-light video cameras used by the TAT software with sensors that would allow operation at night and during inclement weather. In particular, it was desired to allow operation after dark without requiring external lighting. As part of this work, the system software was also upgraded to allow the use of 64-bit computers, the current-generation operating system (OS), software development environment (Windows 7 vs. Windows XP, and current Visual Studio.Net), and improved software version control (Git vs. SourceSafe). With the upgraded performance allowed by new computers, and the additional memory available in a 64-bit OS, the system was able to handle greater traffic densities and gained the ability to handle stop-and-go traffic.

  9. An Intelligent Surveillance Platform for Large Metropolitan Areas with Dense Sensor Deployment

    PubMed Central

    Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A.; Smilansky, Zeev

    2013-01-01

    This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is well suited for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage. PMID:23748169

  10. A portable platform to collect and review behavioral data simultaneously with neurophysiological signals.

    PubMed

    Tianxiao Jiang; Siddiqui, Hasan; Ray, Shruti; Asman, Priscella; Ozturk, Musa; Ince, Nuri F

    2017-07-01

    This paper presents a portable platform to collect and review behavioral data simultaneously with neurophysiological signals. The whole system comprises four parts: a sensor data acquisition interface, a socket server for real-time data streaming, a Simulink system for real-time processing, and an offline data review and analysis toolbox. A low-cost microcontroller is used to acquire data from external sensors such as an accelerometer and a hand dynamometer. The microcontroller transfers the data, either directly over USB or wirelessly through a Bluetooth module, to a data server written in C++ for the MS Windows OS. The data server also interfaces with a digital glove and captures HD video from a webcam. The acquired sensor data are streamed over the User Datagram Protocol (UDP) to other applications such as Simulink/Matlab for real-time analysis and recording. Neurophysiological signals such as electroencephalography (EEG), electrocorticography (ECoG) and local field potential (LFP) recordings can be collected simultaneously in Simulink and fused with the behavioral data. In addition, we developed a customized Matlab Graphical User Interface (GUI) to review, annotate and analyze the data offline. The software provides a fast, user-friendly data visualization environment with a synchronized video playback feature, and is also capable of reviewing long-term neural recordings. Other featured functions such as fast preprocessing with multithreaded filters, annotation, montage selection, power spectral density (PSD) estimates, time-frequency maps and spatial spectral maps are also implemented.
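    A minimal sketch of the UDP streaming step, assuming a simple fixed packet layout (a sample counter plus three accelerometer axes). The layout, destination port and rate below are invented for illustration, as the paper does not specify its wire format:

    ```python
    import socket
    import struct
    import time

    # Hypothetical layout: little-endian uint32 counter + three float32 axes.
    PACKET = struct.Struct("<Ifff")
    DEST = ("127.0.0.1", 9220)        # e.g. a Simulink UDP Receive block

    def stream_samples(read_accel, rate_hz=100):
        """Push accelerometer samples to a UDP listener at a fixed rate."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        n = 0
        while True:                                       # runs until interrupted
            x, y, z = read_accel()                        # user-supplied sensor read
            sock.sendto(PACKET.pack(n, x, y, z), DEST)    # fire-and-forget datagram
            n += 1
            time.sleep(1.0 / rate_hz)
    ```

    UDP is a natural choice here: dropping an occasional sample is acceptable for real-time display and fusion, and the counter field lets the receiver detect any gaps.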

  11. HPC enabled real-time remote processing of laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.

    2016-03-01

    Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system, whose video streams generate approximately 360 megabytes of data per second. Processing this large data stream in real time on a bedside PC (a single- or dual-node setup) has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework also enables reliability through replication of computation. We securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access GPU-equipped computing clusters will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.

  12. Plugin free remote visualization in the browser

    NASA Astrophysics Data System (ADS)

    Tamm, Georg; Slusallek, Philipp

    2015-01-01

    Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state-of-the-art of technologies which enable plugin free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) conform Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.

  13. Use of Video Podcasts to Communicate Scientific Findings to Non-Scientists— Examples from the U.S. Geological Survey National Water-Quality Assessment Program

    NASA Astrophysics Data System (ADS)

    Harned, D. A.; McMahon, G.; Capelli, K.

    2010-12-01

    The U.S. Geological Survey (USGS) National Water-Quality Assessment Program (NAWQA) provides information about (1) water-quality conditions and how those conditions vary locally, regionally, and nationally, (2) water-quality trends, and (3) factors that affect those conditions. As part of the NAWQA Program, the Effects of Urbanization on Stream Ecosystems (EUSE) study examined the vulnerability and resilience of streams to urbanization. Completion of the EUSE study has resulted in over 20 scientific publications. Video podcasts are being used to communicate the relevance of these scientific findings to resource managers and the general public. Two video podcasts have been produced to date (9-1-2010). The first film “Effects of Urbanization on Stream Ecosystems” is a 3-minute summary of results of the EUSE study. The film is accessible on the USGS Corecast website (http://www.usgs.gov/corecast/details.asp?ep=127) and is available in MPG, WMV, and QuickTime formats, as an audio-only podcast, with a complete transcript of the film; and as a YouTube video (http://www.youtube.com/watch?v=BYwZiiORYG8) with subtitles. The film has been viewed over 6200 times, with most downloads occurring in the first 3 weeks after the June release. Views of the film declined to approximately 60 a week for the following 9 weeks. Most of the requests for the film have originated from U.S. domain addresses, with 13 percent originating from international addresses. The film was posted on YouTube in June and has received 262 views since that time. A 15-minute version of the film with more technical content is also available for access from the EUSE website (http://water.usgs.gov/nawqa/urban/html/podcasts.html). It has been downloaded over 660 times. The bulk of the requests occurred in the first 2 weeks after release, with most requests originating from U.S. addresses and 11 percent originating internationally. In the second film “Stormwater, Impervious Surface, and Stream Health” (not yet released) a discussion of impacts of stormwater runoff on stream health is combined with a documentary of a stream cleanup by middle-school students. The film’s intended audience is resource managers, public officials, and the general public. Additional films are planned for release in 2011, addressing habitat effects, innovative approaches for stormwater management, and State and local management issues. Lessons learned in production of the films included an appreciation for the importance of communicating the scientific message in everyday English and in the most stripped-down form possible, and for the amount of time required to condense technical findings into concise messages. Attention to the technical elements of film production is important, and the use of video clips to illustrate ideas instead of technical language and slideshows is paramount. The films should be made available on several different web venues, and should be downloadable in several different resolutions to ease accessibility. Video is an effective means to reach out beyond the scientific community to the wider public to present easily digestible information that may impact decision making.

  14. Advances of FishNet towards a fully automatic monitoring system for fish migration

    NASA Astrophysics Data System (ADS)

    Kratzert, Frederik; Mader, Helmut

    2017-04-01

    Restoring the continuum of river networks affected by anthropogenic constructions is one of the main objectives of the Water Framework Directive. Regarding fish migration, fish passes are a widely used measure, and the functionality of these fish passes often needs to be assessed by monitoring. Over the last years, we developed a new semi-automatic monitoring system (FishCam) which allows the contact-free observation of fish migration in fish passes through videos. The system consists of a detection tunnel, equipped with a camera, a motion sensor and artificial light sources, as well as software (FishNet) which helps to analyze the video data. In its latest version, the software is capable of detecting and tracking objects in the videos as well as classifying them into "fish" and "no-fish" objects. This allows filtering out the videos containing at least one fish (approx. 5% of all grabbed videos) and reduces the manual labor to the analysis of these videos. In this state, the system has already been used in over 20 different fish passes across Austria, for a total of over 140 months of monitoring and more than 1.4 million analyzed videos. As a next step towards a fully automatic monitoring system, a key feature is the automated classification of the detected fish into their species, which is still an unsolved task in a fully automatic monitoring environment. Recent advances in the field of machine learning, especially image classification with deep convolutional neural networks, look promising for solving this problem. In this study, different approaches to fish species classification are tested. Besides an image-only classification approach using deep convolutional neural networks, various methods that combine the power of convolutional neural networks as image descriptors with additional features, such as the fish length and the time of appearance, are explored. To facilitate the development and testing phase of this approach, a subset of six fish species of Austrian rivers and streams is considered in this study. All scripts and data to reproduce the results of this study will be made publicly available on GitHub* at the beginning of the EGU2017 General Assembly. * https://github.com/kratzert/EGU2017_public/
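
    A minimal sketch of the feature-fusion idea described above: concatenate a CNN image descriptor with scalar features such as fish length and time of appearance, then train a simple classifier. The embedding source, feature names, and classifier choice are assumptions for illustration, not the FishNet implementation.

    ```python
    # Sketch of "CNN features + auxiliary features" fusion with random
    # stand-in data; in practice the descriptors would come from a
    # pretrained network's penultimate layer.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fuse_features(cnn_embeddings, fish_lengths_cm, hours_of_day):
        """Concatenate image descriptors with simple scalar features."""
        aux = np.column_stack([fish_lengths_cm, hours_of_day])
        return np.hstack([cnn_embeddings, aux])

    rng = np.random.default_rng(0)
    X_img = rng.normal(size=(200, 128))      # stand-in for CNN descriptors
    lengths = rng.uniform(5, 80, size=200)   # fish length in cm
    hours = rng.uniform(0, 24, size=200)     # time of appearance
    y = rng.integers(0, 6, size=200)         # six species labels

    X = fuse_features(X_img, lengths, hours)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))
    ```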

  15. Into the Black Box: Using Data Mining of In-Game Actions to Draw Inferences from Educational Technology about Students' Math Knowledge

    ERIC Educational Resources Information Center

    Kerr, Deirdre Song

    2014-01-01

    Educational video games have the potential to be used as assessments of student understanding of complex concepts. However, the interpretation of the rich stream of complex data that results from the tracking of in-game actions is so difficult that it is one of the most serious blockades to the use of educational video games or simulations to…

  16. Initial clinical outcomes of audiovisual-assisted therapeutic ambience in radiation therapy (AVATAR).

    PubMed

    Hiniker, Susan M; Bush, Karl; Fowler, Tyler; White, Evan C; Rodriguez, Samuel; Maxim, Peter G; Donaldson, Sarah S; Loo, Billy W

    Radiation therapy is an important component of treatment for many childhood cancers. Depending upon the age and maturity of the child, pediatric radiation therapy often requires general anesthesia for immobilization, position reproducibility, and daily treatment delivery. We designed and clinically implemented a radiation therapy-compatible audiovisual system that allows children to watch streaming video during treatment, with the goal of reducing the need for daily anesthesia through immersion in video. We designed an audiovisual-assisted therapeutic ambience in radiation therapy (AVATAR) system using a digital media player with wireless streaming and a pico projector, and a radiolucent display screen positioned within the child's field of view to provide him or her with sufficient entertainment and distraction for the duration of serial treatments without the need for daily anesthesia. We piloted this system in 25 pediatric patients between the ages of 3 and 12 years. We calculated the number of fractions of radiation for which this system was used successfully and anesthesia avoided, and compared it with the anesthesia rates reported in the literature for children of this age. Twenty-three of 25 patients (92%) were able to complete the prescribed course of radiation therapy without anesthesia using the AVATAR system, with a total of 441 fractions of treatment administered using AVATAR. The median age of patients successfully treated with this approach was 6 years. Seven of the 23 patients were initially treated with daily anesthesia and were successfully transitioned to use of the AVATAR system. Patients and families reported an improved treatment experience with the use of the AVATAR system compared with anesthesia. The AVATAR system enables a high proportion of children to undergo radiation therapy without anesthesia compared with reported anesthesia rates, justifying continued development and clinical investigation of this technique.

  17. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from server to clients because re-distribution infrastructure is not yet established. Intranet solutions exist but are typically managed centrally. Caching may overcome these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly True VoD, without depending on network resource guarantees.

  18. Power-Constrained Fuzzy Logic Control of Video Streaming over a Wireless Interconnect

    NASA Astrophysics Data System (ADS)

    Razavi, Rouzbeh; Fleury, Martin; Ghanbari, Mohammed

    2008-12-01

    Wireless communication of video, with Bluetooth as an example, represents a compromise between channel conditions, display and decode deadlines, and energy constraints. This paper proposes fuzzy logic control (FLC) of automatic repeat request (ARQ) as a way of reconciling these factors, yielding a 40% power saving in the worst channel conditions by economizing on transmissions when channel errors occur. Across all channel conditions, FLC is shown to outperform the default Bluetooth scheme and an alternative Bluetooth-adaptive ARQ scheme in terms of reduced packet loss and delay, as well as improved video quality.
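
    A toy sketch of fuzzy logic control of ARQ effort, in the spirit of the scheme described above: channel loss rate and decode-deadline slack are fuzzified and a small rule base sets a retransmission limit. The membership functions, rules, and output scaling are invented for illustration and are not the paper's controller.

    ```python
    def ramp_up(x, a, b):
        """0 below a, rising linearly to 1 at b."""
        return min(1.0, max(0.0, (x - a) / (b - a)))

    def ramp_down(x, a, b):
        """1 below a, falling linearly to 0 at b."""
        return 1.0 - ramp_up(x, a, b)

    def arq_retry_limit(loss_rate, deadline_slack):
        """Map channel loss rate [0,1] and deadline slack [0,1]
        to a retransmission limit with two toy fuzzy rules."""
        bad   = ramp_up(loss_rate, 0.2, 0.6)
        good  = ramp_down(loss_rate, 0.1, 0.4)
        tight = ramp_down(deadline_slack, 0.2, 0.5)
        loose = ramp_up(deadline_slack, 0.3, 0.7)

        w_persist = min(good, loose)  # clean channel, relaxed deadline: retry more
        w_abandon = min(bad, tight)   # lossy channel, tight deadline: save energy
        total = w_persist + w_abandon
        if total == 0.0:
            return 4                  # neutral fallback
        # Weighted average between 8 retries (persist) and 1 retry (abandon).
        return round((8 * w_persist + 1 * w_abandon) / total)

    print(arq_retry_limit(loss_rate=0.7, deadline_slack=0.1))  # -> 1
    ```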

  19. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for the adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for the transmission of H.264/AVC streams.
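
    To make the channel-rate-allocation step concrete, here is a small dynamic-programming sketch that distributes a parity-packet budget across slice groups to maximize expected quality. The utility table is fabricated for illustration; in the paper it would follow from the loss model and the Reed-Solomon code strength.

    ```python
    # Unequal error protection via DP: utility[l][p] is the expected
    # quality when layer l receives p parity packets (values invented).
    def allocate_parity(utility, budget):
        """Return (best total utility, parity packets per layer)."""
        best = {0: (0.0, [])}  # budget used -> (value, allocation so far)
        for layer in utility:
            nxt = {}
            for used, (val, alloc) in best.items():
                for p, u in enumerate(layer):
                    nb = used + p
                    if nb > budget:
                        break
                    cand = (val + u, alloc + [p])
                    if nb not in nxt or cand[0] > nxt[nb][0]:
                        nxt[nb] = cand
            best = nxt
        return max(best.values())

    utility = [[0, 2.0, 2.8, 3.1],   # most important slice group
               [0, 1.2, 1.8, 2.0],
               [0, 0.5, 0.8, 0.9]]
    print(allocate_parity(utility, budget=5))  # -> (5.1, [2, 2, 1])
    ```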

  20. An improvement analysis on video compression using file segmentation

    NASA Astrophysics Data System (ADS)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which now accounts for the bulk of data traffic in general. Because video occupies so much of the data on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video would make video data easier for users to access. To this end, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which technology is superior in terms of rate-distortion performance. This paper addresses the difficulty of achieving low delay in video compression for applications such as ad-hoc video conferencing/streaming and surveillance. It also benchmarks the HEVC and VP9 video compression techniques through subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing the video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression both on the web and in offline mode.
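
    The segment-and-reassemble idea lends itself to parallel encoding. A rough sketch follows, assuming the ffmpeg command-line tool is installed; the file names, segment length, and codec settings are illustrative and not the authors' pipeline.

    ```python
    # Split a video into fixed-length chunks, encode each independently in
    # parallel, then concatenate without re-encoding (ffmpeg concat demuxer).
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    SEGMENT_SECONDS = 10

    def encode_segment(src, index):
        out = f"seg_{index:04d}.mp4"
        subprocess.run([
            "ffmpeg", "-y",
            "-ss", str(index * SEGMENT_SECONDS),  # seek to segment start
            "-t", str(SEGMENT_SECONDS),           # take one segment's worth
            "-i", src,
            "-c:v", "libx265",                    # HEVC encode
            out,
        ], check=True)
        return out

    def compress_in_segments(src, n_segments):
        with ProcessPoolExecutor() as pool:
            parts = list(pool.map(encode_segment,
                                  [src] * n_segments, range(n_segments)))
        with open("parts.txt", "w") as f:
            f.writelines(f"file '{p}'\n" for p in parts)
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", "parts.txt", "-c", "copy", "out.mp4"], check=True)

    if __name__ == "__main__":
        compress_in_segments("input.mp4", n_segments=6)
    ```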

  1. “Is Your Man Stepping Out?” An online pilot study to evaluate acceptability of a guide-enhanced HIV prevention soap opera video series and feasibility of recruitment by Facebook© advertising

    PubMed Central

    Jones, Rachel; Lacroix, Lorraine J.; Nolte, Kerry

    2015-01-01

    Love, Sex, and Choices (LSC) is a 12-episode soap opera video series developed to reduce HIV risk among at-risk Black urban women. We added a video guide commentator to offer insights at critical dramatic moments. An online pilot study evaluated acceptability of the Guide Enhanced LSC (GELSC) and feasibility of Facebook© advertising, streaming to smartphones, and retention. Facebook© ads targeted high HIV-prevalence areas. In 30 days, Facebook© ads generated 230 screening interviews; 84 were high risk, 40 watched GELSC, and 39 followed up at 30 days. Recruitment of high-risk participants was 10 per week compared to 7 per week in previous field recruitment. Half the sample was Black; 12% were Latina. Findings suggest GELSC influenced sex scripts and behaviors. It was feasible to recruit young urban women from a large geographic area via Facebook© and to retain the sample. We extended the reach to at-risk women by streaming to mobile devices. PMID:26066692

  2. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    NASA Astrophysics Data System (ADS)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  3. Application Layer Multicast

    NASA Astrophysics Data System (ADS)

    Allani, Mouna; Garbinato, Benoît; Pedone, Fernando

    An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming and multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with those of the underlying multicast mechanism.

  4. Telepresence and real-time data transmission from Axial Seamount: implications for education and community engagement utilizing the OOI-RSN cabled observatory

    NASA Astrophysics Data System (ADS)

    Fundis, A. T.; Kelley, D. S.; Sautter, L. R.; Proskurowski, G.; Kawka, O.; Delaney, J. R.

    2011-12-01

    Axial Seamount, the most robust volcanic system on the Juan de Fuca Ridge, is a future site of the cabled observatory component of the National Science Foundation's Ocean Observatories Initiative (OOI) (see Delaney et al.; Proskurowski et al., this meeting). In 2014, high-bandwidth data, high-definition video and digital still imagery will be streamed live from the cabled observatory at Axial Seamount via the Internet to researchers, educators, and the public. The real-time data and high-speed communications stream will open new approaches for onshore scientists and the public to experience and engage in sea-going research as it is happening. For the next 7 years, the University of Washington and the OOI will collaboratively support an annual multi-week cruise aboard the research vessel Thomas G. Thompson. These "VISIONS" cruises will include scientific and maintenance operations related to the cabled network, the OOI Regional Scale Nodes (RSN). Leading up to 2014, VISIONS cruises will also be used to engage students, educators, scientists and the public in science focused at Axial Seamount through avenues that will be adaptable to the live data stream via the OOI-RSN cable. Here we describe the education and outreach efforts employed during the VISIONS'11 cruise to Axial Seamount, including: 1) a live HD video stream from the seafloor and the ship to onshore scientists, educators, and the public; 2) a pilot program to teach undergraduates from the ship via live and taped broadcasts; 3) the use of social media from the ship to communicate with scientists, educators, and the public onshore; and 4) the immersion of undergraduate and graduate students onboard in sea-going research. The 2011 eruption at Axial Seamount (see Chadwick et al., this meeting) is a prime example of the potential behind having these effective tools in place to engage the scientific community, students, and the public when the OOI cabled observatory comes online in 2014.

  5. Content-Aware Video Adaptation under Low-Bitrate Constraint

    NASA Astrophysics Data System (ADS)

    Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin

    2007-12-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming is increasingly widespread in such environments. Given limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation scheme to utilize resources effectively and improve visual perceptual quality. First, an attention model is derived by analyzing brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, the attractive regions of video scenes are derived. An information object- (IOB-) weighted rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme effectively and efficiently achieves better visual quality.
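
    A minimal sketch of attention-weighted bit allocation, the core idea behind the adaptation above: salient regions receive a larger share of the frame's bit budget. The saliency scores and the proportional rule are illustrative assumptions, not the paper's IOB-weighted rate distortion model.

    ```python
    # Give each region a bit budget proportional to its saliency weight.
    def allocate_bits(region_saliency, frame_budget_bits):
        """region_saliency: dict region_id -> saliency weight (> 0).
        Returns dict region_id -> bit budget."""
        total = sum(region_saliency.values())
        return {r: int(frame_budget_bits * s / total)
                for r, s in region_saliency.items()}

    # A frame with one attended object, a secondary object, and background.
    budgets = allocate_bits({"player": 0.6, "ball": 0.3, "background": 0.1},
                            frame_budget_bits=150_000)
    print(budgets)
    ```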

  6. The H.264/AVC advanced video coding standard: overview and introduction to the fidelity range extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay

    2004-11-01

    H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.

  7. Integrating unmanned aerial systems and LSPIV for rapid, cost-effective stream gauging

    NASA Astrophysics Data System (ADS)

    Lewis, Quinn W.; Lindroth, Evan M.; Rhoads, Bruce L.

    2018-05-01

    Quantifying flow in rivers is fundamental to assessments of water supply, water quality, ecological conditions, hydrological responses to storm events, and geomorphological processes. Image-based surface velocity measurements have shown promise in extending the range of discharge conditions that can be measured in the field. The use of Unmanned Aerial Systems (UAS) in image-based measurements of surface velocities has the potential to expand applications of this method. Thus far, few investigations have assessed this potential by evaluating the accuracy and repeatability of discharge measurements using surface velocities obtained from UAS. This study uses large-scale particle image velocimetry (LSPIV) derived from videos captured by cameras on a UAS and a fixed tripod to obtain discharge measurements at ten different stream locations in Illinois, USA. Discharge values are compared to reference values measured by an acoustic Doppler current profiler, a propeller meter, and established stream gauges. The results demonstrate the effects of UAS flight height, camera steadiness and leveling accuracy, video sampling frequency, and LSPIV interrogation area size on surface velocities, and show that the mean difference between fixed and UAS cameras is less than 10%. Differences between LSPIV-derived and reference discharge values are generally less than 20%, not systematically low or high, and not related to site parameters like channel width or depth, indicating that results are relatively insensitive to camera setup and image processing parameters typically required of LSPIV. The results also show that standard velocity indices (between 0.85 and 0.9) recommended for converting surface velocities to depth-averaged velocities yield reasonable discharge estimates, but are best calibrated at specific sites. The study recommends a basic methodology for LSPIV discharge measurements using UAS that is rapid, cost-efficient, and does not require major preparatory work at a measurement location, pre- and post-processing of imagery, or extensive background in image analysis and PIV.
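
    A worked sketch of the final step described above: converting LSPIV surface velocities to a discharge estimate with the velocity-area method and a surface velocity index in the 0.85-0.9 range. The cross-section data are invented for illustration.

    ```python
    # Velocity-area method: Q = sum over verticals of width * depth * k * v_surface.
    def discharge(stations, k=0.85):
        """stations: list of (width_m, depth_m, surface_velocity_m_s)
        for each vertical slice across the channel. Returns Q in m^3/s."""
        return sum(w * d * k * v for w, d, v in stations)

    cross_section = [
        (1.0, 0.30, 0.40),
        (1.0, 0.55, 0.65),
        (1.0, 0.60, 0.70),
        (1.0, 0.45, 0.55),
        (1.0, 0.25, 0.35),
    ]
    print(f"Q = {discharge(cross_section):.2f} m^3/s")
    ```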

  8. Technology Directions for the 21st Century. Volume 4

    NASA Technical Reports Server (NTRS)

    Crimi, Giles; Verheggen, Henry; Botta, Robert; Paul, Heywood; Vuong, Xuyen

    1998-01-01

    Data compression is an important tool for reducing the bandwidth of communications systems, and thus for reducing the size, weight, and power of spacecraft systems. For data requiring lossless transmissions, including most science data from spacecraft sensors, small compression factors of two to three may be expected. Little improvement can be expected over time. For data that is suitable for lossy compression, such as video data streams, much higher compression factors can be expected, such as 100 or more. More progress can be expected in this branch of the field, since there is more hidden redundancy and many more ways to exploit that redundancy.

  9. Multimedia and Some of Its Technical Issues.

    ERIC Educational Resources Information Center

    Wang, Shousan

    2000-01-01

    Discusses multimedia and its use in classroom teaching. Describes integrated services digital networks (ISDN); video-on-demand, that uses streaming technology via the Internet; and computer-assisted instruction. (Contains 19 references.) (LRW)

  10. Perpetual Ocean - Gulf Stream

    NASA Image and Video Library

    2017-12-08

    This image shows ocean surface currents around the world during the period from June 2005 through December 2007. Go here to view a video of this data: www.flickr.com/photos/gsfc/7009056027/ NASA/Goddard Space Flight Center Scientific Visualization Studio. NASA image use policy.

  11. Virtual imaging in sports broadcasting: an overview

    NASA Astrophysics Data System (ADS)

    Tan, Yi

    2003-04-01

    Virtual imaging technology is being used to augment television broadcasts -- virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the contents and the entertainment aspect of these programs. State-of-the-art technologies, such as image recognition, motion tracking and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.

  12. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and for television panels, demand source-signal resolutions beyond 4K in virtual reality video streaming applications. This poses a content-delivery problem over bandwidth-limited distribution networks. Because the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications is proposed: using a discontinuity entropy minimization function, the projection origin rotation can be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  13. Development of a telediagnosis endoscopy system over secure internet.

    PubMed

    Ohashi, K; Sakamoto, N; Watanabe, M; Mizushima, H; Tanaka, H

    2008-01-01

    We developed a new telediagnosis system to securely transmit high-quality endoscopic moving images over the Internet in real time. This system would enable collaboration between physicians seeking advice from endoscopists separated by long distances, to facilitate diagnosis. We adapted a new type of digital video streaming system (DVTS) to our teleendoscopic diagnosis system. To investigate its feasibility, we conducted a two-step experiment. A basic experiment was first conducted to transmit endoscopic video images between hospitals using a plain DVTS. After investigating the practical usability, we incorporated a secure and reliable communication function into the system, by equipping DVTS with "TCP2", a new security technology that establishes secure communication in the transport layer. The second experiment involved international transmission of teleendoscopic image between Hawaii and Japan using the improved system. In both the experiments, no serious transmission delay was observed to disturb physicians' communications and, after subjective evaluation by endoscopists, the diagnostic qualities of the images were found to be adequate. Moreover, the second experiment showed that "TCP2-equipped DVTS" successfully executed high-quality secure image transmission over a long distance network. We conclude that DVTS technology would be promising for teleendoscopic diagnosis. It was also shown that a high quality, secure teleendoscopic diagnosis system can be developed by equipping DVTS with TCP2.

  14. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.

    PubMed

    Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-03

    Prediction systems face challenges on two fronts: the relation between video quality and observed session features, and dynamic changes in video quality. Software Defined Networks (SDN) is a new concept of network architecture that separates the control plane (controller) from the data plane (switches) in network devices. Thanks to the southbound interface, it is possible to deploy monitoring tools that obtain the network status and retrieve collections of statistics. Achieving the most accurate statistics therefore depends on the strategy for monitoring and requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flows in SDN networks. The algorithm groups network switches into clusters according to their number of ports and applies different monitoring techniques to each cluster. The grouping avoids monitoring queries to network switches with common characteristics and thus omits redundant information. In this way, the proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar accuracy while decreasing the number of queries to the switches.
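
    A toy sketch of the clustering idea: group switches by port count and poll one representative per cluster instead of every switch. The switch IDs and the polling callback are invented for illustration; a real deployment would issue OpenFlow statistics requests through the controller.

    ```python
    # Cluster switches by port count, then query one representative each.
    from collections import defaultdict

    def cluster_by_ports(switches):
        """switches: dict switch_id -> number of ports."""
        clusters = defaultdict(list)
        for sw, n_ports in switches.items():
            clusters[n_ports].append(sw)
        return clusters

    def poll_representatives(clusters, query_stats):
        """Query only the first switch of each cluster and reuse the result."""
        stats = {}
        for members in clusters.values():
            shared = query_stats(members[0])  # one stats request per cluster
            for sw in members:
                stats[sw] = shared            # assume similar behavior
        return stats

    switches = {"s1": 4, "s2": 4, "s3": 8, "s4": 4, "s5": 8}
    clusters = cluster_by_ports(switches)
    stats = poll_representatives(clusters, lambda sw: {"rx_bytes": 0})
    print(len(clusters), "queries instead of", len(switches))
    ```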

  15. Persistent aerial video registration and fast multi-view mosaicing.

    PubMed

    Molina, Edgardo; Zhu, Zhigang

    2014-05-01

    Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full-motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speed or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all aligned to the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second and later passes can be generated and visualized online, as there is no further batch error correction.

  16. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).

  17. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems destined to replace human vision in the quality control of industrial production. In this paper we discuss a new design environment for real-time defect detection using a reconfigurable FPGA and a DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  18. A comparison of two methods for assessing awareness of antitobacco television advertisements.

    PubMed

    Luxenberg, Michael G; Greenseid, Lija O; Depue, Jacob; Mowery, Andrea; Dreher, Marietta; Larsen, Lindsay S; Schillo, Barbara

    2016-05-01

    This study uses an online survey panel to compare two approaches for assessing ad awareness. The first uses a screenshot of a television ad and the second shows participants a full-length video of the ad. We randomly assigned 1034 Minnesota respondents to view a screenshot or a streaming video from two antitobacco ads. The study used one ad from ClearWay Minnesota's "We All Pay the Price" campaign, and one from the Centers for Disease Control "Tips" campaign. The key measure used to assess ad awareness was aided ad recall. Multivariate analyses of recall with cessation behaviour and attitudinal beliefs assessed the validity of these approaches. The respondents who saw the video reported significantly higher recall than those who saw the screenshot. Associations of recall with cessation behaviour and attitudinal beliefs were stronger and in the anticipated direction using the screenshot method. Over 20% of the respondents assigned to the video group could not see the ad. Respondents under 45 years old, those with incomes greater than $35,000, and women were reportedly less able to access the video. The methodology used to assess recall matters: campaigns may exaggerate the successes or failures of their media campaigns depending on the approach they employ and how they compare it to other media campaign evaluations. When incorporating streaming video, researchers should consider accessibility and report possible response bias. Researchers should fully define the measures they use, specify any viewing accessibility issues, and make ad comparisons only when using comparable methods.

  19. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    NASA Astrophysics Data System (ADS)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.

  20. Optimal frame-by-frame result combination strategy for OCR in video stream

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy

    2018-04-01

    This paper describes the problem of combining the classification results of multiple observations of one object. The task can be regarded as a particular case of decision-making that combines expert votes with calculated weights. The accuracy of various methods of combining classification results under different input data models is investigated using the example of frame-by-frame character recognition in a video stream. Experiments show that the strategy of choosing the single most competent expert has an advantage when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors). At the same time, the work demonstrates the advantage of combining several of the most competent experts, according to the multiplication rule or by voting, when irrelevant samples are present in the input data.
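
    To make the two strategies concrete, here is a small sketch contrasting the single most competent expert with a multiplication (product) rule over the top-weighted experts. The per-frame class probabilities and competence weights are invented for illustration.

    ```python
    # Frame-by-frame result combination: best expert vs. product rule.
    import numpy as np

    def best_expert(frame_probs, weights):
        """Return the class distribution of the most competent frame."""
        return frame_probs[int(np.argmax(weights))]

    def product_rule(frame_probs, weights, top_k=3):
        """Multiply the class distributions of the top-k weighted frames."""
        top = np.argsort(weights)[-top_k:]
        fused = np.prod(frame_probs[top], axis=0)
        return fused / fused.sum()

    # Three frames voting over four character classes.
    frame_probs = np.array([[0.6, 0.2, 0.1, 0.1],
                            [0.5, 0.3, 0.1, 0.1],
                            [0.1, 0.7, 0.1, 0.1]])
    weights = np.array([0.9, 0.8, 0.3])  # per-frame competence
    print(best_expert(frame_probs, weights).argmax(),
          product_rule(frame_probs, weights, top_k=2).argmax())
    ```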

  1. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    NASA Astrophysics Data System (ADS)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is demonstrated on the problem of detecting and recognizing bank card logos in a video stream. The proposed algorithm can be effectively used for document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
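
    A toy epsilon-greedy sketch of the bandit-style selection idea: each cascade is an "arm", rewarded when it detects its object in the current frame. Epsilon-greedy is one common greedy strategy for the N-armed bandit problem; the cascade names and the detection callback are placeholders, not the authors' implementation.

    ```python
    # Epsilon-greedy selection among per-class Viola-Jones cascades.
    import random

    def select_cascade(stats, eps=0.1):
        """stats: dict name -> [successes, trials]. Epsilon-greedy choice."""
        if random.random() < eps:
            return random.choice(list(stats))
        return max(stats, key=lambda c: stats[c][0] / max(stats[c][1], 1))

    def process_frame(frame, stats, run_cascade):
        name = select_cascade(stats)
        hit = run_cascade(name, frame)  # True if the cascade fired
        stats[name][0] += int(hit)
        stats[name][1] += 1
        return name, hit

    stats = {"visa": [0, 0], "mastercard": [0, 0], "mir": [0, 0]}
    for frame in range(100):
        process_frame(frame, stats, lambda n, f: random.random() < 0.3)
    print(stats)
    ```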

  2. Comparative study of internet cloud and cloudlet over wireless mesh networks for real-time applications

    NASA Astrophysics Data System (ADS)

    Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos

    2014-05-01

    Mobile cloud computing is gaining worldwide momentum, with providers such as Amazon and Google offering ubiquitous on-demand cloud services to mobile users at low capital cost. However, Internet-centric clouds introduce wide area network (WAN) delays that are often intolerable for real-time applications such as video streaming. One promising approach to addressing this challenge is to deploy decentralized mini-cloud facilities known as cloudlets to enable localized cloud services. When supported by local wireless connectivity, a wireless cloudlet is expected to offer low-cost and high-performance cloud services for the users. In this work, we implement a realistic framework that comprises both a popular Internet cloud (Amazon Cloud) and a real-world cloudlet (based on Ubuntu Enterprise Cloud (UEC)) for mobile cloud users in a wireless mesh network. We focus on real-time video streaming over the HTTP standard and implement a typical application. We further perform a comprehensive comparative analysis and empirical evaluation of the application's performance when it is delivered over the Internet cloud and the cloudlet respectively. The study quantifies the influence of the two different cloud networking architectures on supporting real-time video streaming. We also enable movement of the users in the wireless mesh network and investigate the effect of users' mobility on mobile cloud computing over the cloudlet and the Amazon cloud respectively. Our experimental results demonstrate the advantages of the cloudlet paradigm over its Internet cloud counterpart in supporting the quality of service of real-time applications.

  3. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website. Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  4. Fluid dynamics, cavitation, and tip-to-tissue interaction of longitudinal and torsional ultrasound modes during phacoemulsification.

    PubMed

    Zacharias, Jaime; Ohl, Claus-Dieter

    2013-04-01

    To describe the fluidic events that occur in a test chamber during phacoemulsification with longitudinal and torsional ultrasound (US) modalities. Pasteur Ophthalmic Clinic Phacodynamics Laboratory, Santiago, Chile, and Nanyang Technological University, Singapore. Experimental study. Ultra-high-speed videos of a phacoemulsifying tip were recorded while the tip operated in longitudinal and torsional US modalities using variable US power. Two high-speed video cameras were used to record videos at up to 625,000 frames per second. A high-intensity spotlight source was used for illumination to enable shadowgraphy techniques. Particle image velocimetry was used to evaluate fluidic patterns, while a hyperbaric environmental system allowed the evaluation of cavitation effects. Tip-to-tissue interaction at high speed was evaluated using human cataract fragments. Particle image velocimetry showed the following flow patterns at high US power: forward-directed streaming with the longitudinal mode and backward-directed streaming with the torsional mode. The ultrasound power threshold for the appearance of cavitation was 60% for the longitudinal mode and 80% for the torsional mode. Cavitation was suppressed with a pressure of 1.0 bar for the longitudinal mode and 0.3 bar for the torsional mode. Generation of previously unseen stable gaseous microbubbles was noted. Tip-to-tissue interaction analysis showed the presence of cavitation bubbles close to the site of fragmentation with no apparent effect on cutting. High-speed imaging and particle image velocimetry yielded a better understanding of, and differentiated, the fluidic pattern behavior between longitudinal and torsional US during phacoemulsification. These recordings also showed more detailed aspects of cavitation that clarified its role in lens material cutting for both modalities.

  5. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-induced bit errors. The robust performance of 3D-MVV transmission schemes over wireless channels has therefore become a considerable hot research issue, due to restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the realistic scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process; it reduces the effect of channel errors on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD watermarked 3D-MVV frames were executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios, and watermark extraction remains possible in the proposed framework.
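
    To illustrate the role of interleaving before modulation, here is a simplified sketch in which a plain row-column block interleaver stands in for the chaotic Baker map used in the paper: the principle (scattering a burst of channel errors across the bit stream) is the same, but this is not the authors' permutation and provides none of its encryption properties.

    ```python
    # Block interleaving as a simplified stand-in for chaotic Baker interleaving.
    import numpy as np

    def interleave(bits, rows, cols):
        """Write bits row-wise into a rows x cols block, read column-wise."""
        assert len(bits) == rows * cols
        return np.asarray(bits).reshape(rows, cols).T.ravel()

    def deinterleave(bits, rows, cols):
        """Inverse: write column-wise, read row-wise."""
        return np.asarray(bits).reshape(cols, rows).T.ravel()

    bits = np.arange(12)        # stand-in for a frame's bit stream
    tx = interleave(bits, 3, 4)
    rx = deinterleave(tx, 3, 4)
    assert (rx == bits).all()   # a burst in tx maps to scattered errors in rx
    ```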

  6. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization package that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then exploits the cloud environment to boost the speed of rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  7. A Scalable QoS-Aware VoD Resource Sharing Scheme for Next Generation Networks

    NASA Astrophysics Data System (ADS)

    Huang, Chenn-Jung; Luo, Yun-Cheng; Chen, Chun-Hua; Hu, Kai-Wen

    In the network-aware concept, applications are aware of network conditions and adapt to the varying environment to achieve acceptable and predictable performance. In this work, a solution for video-on-demand service that integrates wireless and wired networks using network-aware concepts is proposed to reduce the blocking and dropping probabilities of mobile requests. A fuzzy logic inference system is employed to select appropriate cache relay nodes to cache published video streams and distribute them to different peers through a service-oriented architecture (SOA). A SIP-based control protocol and the IMS standard are adopted to enable heterogeneous communication and to provide a framework for delivering real-time multimedia services over an IP-based network, ensuring interoperability, roaming, and end-to-end session management. The experimental results demonstrate the effectiveness and practicability of the proposed work.

  8. Building Airport Surface HITL Simulation Capability

    NASA Technical Reports Server (NTRS)

    Chinn, Fay Cherie

    2016-01-01

    FutureFlight Central (FFC) is a high-fidelity, real-time simulator designed to study surface operations and automation. As an air traffic control tower simulator, FFC allows stakeholders such as the FAA, controllers, pilots, airports, and airlines to develop and test advanced surface and terminal-area concepts and automation, including NextGen and beyond automation concepts and tools. These technologies will address the safety, capacity and environmental issues facing the National Airspace System. FFC also has extensive video streaming capabilities, which, combined with the 3-D database capability, make the facility ideal for any research needing an immersive virtual and/or video environment. FutureFlight Central allows human-in-the-loop testing, which accommodates human interactions and errors, giving a more complete picture than fast-time simulations. This presentation describes FFC's capabilities and the components necessary to build an airport surface human-in-the-loop simulation capability.

  9. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore the 3D compression space from a different angle, with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
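
    As a concrete illustration of the second scheme's depth path, the sketch below run-length encodes one scanline of quantized depth values (depth maps typically have long constant runs); the runs would then be entropy-coded, with Huffman coding in the paper. The sample values are invented.

    ```python
    # Run-length encoding of a depth scanline, the step preceding Huffman coding.
    def rle_encode(row):
        """Return (value, run_length) pairs for one scanline of depth values."""
        runs, count = [], 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((row[-1], count))
        return runs

    def rle_decode(runs):
        return [v for v, n in runs for _ in range(n)]

    depth_row = [120] * 50 + [121] * 30 + [0] * 20  # quantized depth scanline
    runs = rle_encode(depth_row)
    assert rle_decode(runs) == depth_row
    print(len(depth_row), "values ->", len(runs), "runs")
    ```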

  10. Evaluation of an interactive web-based nursing course with streaming videos for medication administration skills.

    PubMed

    Sowan, Azizeh K; Idhail, Jamila Abu

    2014-08-01

    Nursing students should exhibit competence in nursing skills in order to provide safe and quality patient care. This study describes the design of, and students' response to, an interactive web-based course using streaming video technology, tailored to students' needs and to the course objectives of the fundamentals of nursing skills clinical course. A mixed-methodology design was used to describe the experience of 102 first-year undergraduate nursing students at a school of nursing in Jordan who were enrolled in the course. A virtual course with streaming videos was designed to demonstrate fundamental medication administration skills. The videos recorded the ideal lab demonstration of the skills as well as real-world practice performed by registered nurses for patients in a hospital setting. After course completion, students completed a 30-item satisfaction questionnaire, 8 self-efficacy scales, and a 4-item scale that solicited their preferences on using the virtual course as a substitute for or replacement of the lab demonstration. Students' grades in the skill examination of the procedures were measured. Relationships between the main variables and predictors of satisfaction and self-efficacy were examined. Students were satisfied with the virtual course (3.9 ± 0.56 on a 5-point scale), with high perceived overall self-efficacy (4.38 ± 0.42 on a 5-point scale). Data showed a significant correlation between student satisfaction, self-efficacy, and achievement in the virtual course (r = 0.45-0.49, p < 0.01). The majority of students accessed the course from home, and some faced technical difficulties. Significant predictors of satisfaction were ease of accessing the course and gender (B = 0.35, 0.25; CI = 0.12-0.57, 0.02-0.48, respectively). The mean achievement score of students in the virtual class (7.5 ± 0.34) was significantly higher than that of a previous comparable cohort taught in the traditional method (6.0 ± 0.23) (p < 0.05). Nearly 40% of the students believed that the virtual course is a sufficient replacement for the lab demonstration. The use of multimedia within an interactive online learning environment is a valuable teaching strategy that yields a high level of nursing student satisfaction, self-efficacy, and achievement. The creation and delivery of a virtual learning environment with streaming videos for clinical courses is a complex process that should be carefully designed to positively influence the learning experience. However, the learning benefits gained from such a pedagogical approach are worth the efforts of faculty, institutions, and students. Published by Elsevier Ireland Ltd.

  11. A hardware architecture for real-time shadow removal in high-contrast video

    NASA Astrophysics Data System (ADS)

    Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel

    2017-09-01

    Broadcasting an outdoor sports event in daytime is a challenging task due to the high contrast between shadowed and brightly lit areas within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes properly, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow details. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected-components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
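
    A software sketch of the mask-generation step: convert RGB to YIQ with the standard matrix and flag low-luminance pixels as shadow candidates. The 0.3 threshold is an illustrative assumption; the paper combines cues from both color spaces and adds connected-components filtering.

      import numpy as np

      RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                             [0.596, -0.274, -0.322],
                             [0.211, -0.523,  0.312]])

      def shadow_mask(rgb, y_thresh=0.3):
          """rgb: float array in [0, 1] with shape (H, W, 3)."""
          yiq = rgb @ RGB_TO_YIQ.T
          return yiq[..., 0] < y_thresh   # True where the pixel is likely shadow

      frame = np.random.rand(1080, 1920, 3)
      print("shadow fraction:", shadow_mask(frame).mean())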

  12. MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks

    NASA Astrophysics Data System (ADS)

    Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.

    2007-05-01

    This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. To meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists, first, of a segmentation phase according to motion and texture based on Principal Component Analysis (PCA) of the input video sequence, and, second, of a down-sampling phase that depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a programmable logic device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area, low-power implementation.
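
    The following sketch suggests one way a PCA-based texture map over 8×8 blocks could be computed: blocks that are poorly reconstructed from the leading principal components of the block population are marked as textured. This is a simplification for illustration, not the paper's segmentation.

      import numpy as np

      def texture_map(frame, bs=8, k=4):
          h, w = frame.shape
          hb, wb = h // bs, w // bs
          blocks = (frame[:hb * bs, :wb * bs]
                    .reshape(hb, bs, wb, bs).swapaxes(1, 2).reshape(-1, bs * bs))
          centered = blocks - blocks.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          recon = centered @ vt[:k].T @ vt[:k]   # keep k principal components
          err = np.linalg.norm(centered - recon, axis=1)
          return err.reshape(hb, wb)             # high error = textured block

      tmap = texture_map(np.random.rand(288, 352))   # CIF-sized test frame
      print(tmap.shape)                              # -> (36, 44)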

  13. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
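
    A user-side frame grab of the kind described can be sketched with OpenCV; the file name and frame index below are hypothetical.

      import cv2

      def grab_frame(path, frame_index):
          cap = cv2.VideoCapture(path)
          cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)   # seek, don't decode all
          ok, frame = cap.read()
          cap.release()
          if not ok:
              raise IOError(f"could not read frame {frame_index}")
          return frame

      frame = grab_frame("camhd_recording.mov", 5000)     # hypothetical file
      cv2.imwrite("frame_5000.png", frame)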

  14. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    PubMed Central

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and to process a Full HD video stream (1920 × 1080 @ 60 fps). The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-fold speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303

  15. Efficient hardware implementation of the Horn-Schunck algorithm for high-resolution real-time dense optical flow sensor.

    PubMed

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-02-12

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and to process a Full HD video stream (1920 × 1080 @ 60 fps). The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-fold speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.
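
    For reference, the iterative update that the pipeline realises is the classic Horn-Schunck scheme; a plain NumPy version (without the pre-/post-filtering or the reliability unit) looks like this.

      import numpy as np
      from scipy.ndimage import convolve

      KAVG = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbour average

      def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
          im1, im2 = im1.astype(float), im2.astype(float)
          Ix = convolve(im1, np.array([[-0.5, 0.0, 0.5]]))       # spatial gradients
          Iy = convolve(im1, np.array([[-0.5], [0.0], [0.5]]))
          It = im2 - im1                                          # temporal gradient
          u = np.zeros_like(im1)
          v = np.zeros_like(im1)
          for _ in range(n_iter):
              u_bar, v_bar = convolve(u, KAVG), convolve(v, KAVG)
              c = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
              u, v = u_bar - Ix * c, v_bar - Iy * c
          return u, v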

  16. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.

    2001-05-01

    We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.

  17. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

    How many images can you display at one time with PowerPoint without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow or too small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users in a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 megapixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might display S-VHS video from a remotely operated vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand, and collaborate on complex datasets. In the not-too-distant future, with the rapid growth in networking speeds in the US, it will be possible for Earth Sciences departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks filled with stock geoscience visualizations.

  18. The Assembled Solar Eclipse Package (ASEP) in Bangka Indonesia during the total solar eclipse on March 9, 2016

    NASA Astrophysics Data System (ADS)

    Puji Asmoro, Cahyo; Wijaya, Agus Fany Chandra; Dwi Ardi, Nanang; Abdurrohman, Arman; Aria Utama, Judhistira; Sutiadi, Asep; Hikmat; Ramlan Ramalis, Taufik; Suyardi, Bintang

    2016-11-01

    The Assembled Solar Eclipse Package (ASEP) is an integrated apparatus constructed not only to obtain imaging data during a solar eclipse, but also to meet sky brightness and live streaming requirements. ASEP comprises four main parts: two imaging data recorders, one high-definition video streaming camera, and a sky quality meter (SQM), linked by a personal computer and a motorized mount. The parts are common instruments used for education or personal use. The first part is used to capture corona and prominence images during totality. For the second part, video is powerful data for educating the public through live web streaming. For the last part, the SQM is used to confirm our imaging data during obscuration. A perfect prominence picture was obtained by one of the imaging recorders using a William-Optics F=388mm telescope with a Nikon DSLR D3100. In addition, the diamond ring and corona were recorded by the second imaging tool using a Sky Watcher F=910mm with a Canon DSLR 60D. The third instrument is the Sony HXR MC5 streaming set, able to broadcast to the public domain via the official website. From the SQM, the darkness value during totality is quite similar to a dawn condition. Finally, ASEP was entirely successful and fulfilled our competency as educational researchers at the university.

  19. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.

  20. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    PubMed

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correctly identifying second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in the AB in LVGPs compared with NVGPs.

  1. "Is Your Man Stepping Out?" An Online Pilot Study to Evaluate Acceptability of a Guide-Enhanced HIV Prevention Soap Opera Video Series and Feasibility of Recruitment by Facebook Advertising.

    PubMed

    Jones, Rachel; Lacroix, Lorraine J; Nolte, Kerry

    2015-01-01

    Love, Sex, and Choices (LSC) is a 12-episode soap opera video series developed to reduce HIV risk among at-risk Black urban women. We added a video guide commentator to offer insights at critical dramatic moments. An online pilot study evaluated acceptability of the Guide-Enhanced LSC (GELSC) and feasibility of Facebook advertising, streaming to smartphones, and retention. Facebook ads targeted high-HIV-prevalence areas. In 30 days, Facebook ads generated 230 screening interviews: 84 were high risk, 40 watched GELSC, and 39 followed up at 30 days. Recruitment of high-risk participants was 10 per week, compared to seven per week in previous field recruitment. Half the sample was Black; 12% were Latina. Findings suggest GELSC influenced sex scripts and behaviors. It was feasible to recruit young urban women from a large geographic area via Facebook and to retain the sample. We extended the reach to at-risk women by streaming to mobile devices. Copyright © 2015 Association of Nurses in AIDS Care. Published by Elsevier Inc. All rights reserved.

  2. Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams

    NASA Astrophysics Data System (ADS)

    Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.

    2012-02-01

    Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
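
    The predict-weight-resample skeleton of CONDENSATION can be sketched as follows; measure() is a placeholder standing in for the existing robust fiducial segmentation, and the motion model and constants are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 200
      particles = rng.uniform(0, 1080, size=(N, 2))   # candidate (x, y) positions

      def measure(p):
          # Placeholder likelihood: how strongly the segmentation supports p.
          target = np.array([500.0, 300.0])
          return np.exp(-np.sum((p - target) ** 2) / (2 * 40.0 ** 2))

      for frame in range(10):
          particles += rng.normal(0, 5, size=particles.shape)      # 1. predict
          weights = np.array([measure(p) for p in particles])      # 2. weight
          weights /= weights.sum()
          particles = particles[rng.choice(N, size=N, p=weights)]  # 3. resample

      print("estimated fiducial position:", particles.mean(axis=0))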

  3. Impact of different cloud deployments on real-time video applications for mobile video cloud users

    NASA Astrophysics Data System (ADS)

    Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos

    2015-02-01

    The latest trend of accessing mobile cloud services through wireless network connectivity has grown rapidly worldwide among both entrepreneurs and home end users. Although existing public cloud service vendors such as Google, Microsoft Azure, etc. provide on-demand cloud services at affordable cost for mobile users, there are still a number of challenges in achieving high-quality mobile cloud based video applications, especially due to the bandwidth-constrained and error-prone mobile network connectivity, which is the communication bottleneck for end-to-end video delivery. In addition, existing accessible cloud networking architectures differ in terms of their implementation, services, resources, storage, pricing, support and so on, and these differences have a varied impact on the performance of cloud-based real-time video applications. Nevertheless, these challenges and impacts have not been thoroughly investigated in the literature. In our previous work, we implemented a mobile cloud network model that integrates localized and decentralized cloudlets (mini-clouds) and wireless mesh networks. In this paper, we deploy a real-time framework consisting of various existing Internet cloud networking architectures (Google Cloud, Microsoft Azure and Eucalyptus Cloud) and a cloudlet based on Ubuntu Enterprise Cloud over wireless mesh networking technology for mobile cloud end users. It is noted that accessing real-time video streaming over HTTP/HTTPS is gaining popularity among both research and industrial communities, to leverage the existing web services and HTTP infrastructure in the Internet. To study the performance under different deployments using different public and private cloud service providers, we employ real-time video streaming over the HTTP/HTTPS standard, and conduct experimental evaluation and in-depth comparative analysis of the impact of different deployments on the quality of service for mobile video cloud users. Empirical results are presented and discussed to quantify and explain the different impacts resulting from various cloud deployments, video applications, wireless/mobile network settings, and user mobility. Additionally, this paper analyses the advantages, disadvantages, limitations and optimization techniques in various cloud networking deployments, in particular the cloudlet approach compared with the Internet cloud approach, with recommendations for optimized deployments highlighted. Finally, federated clouds and inter-cloud collaboration challenges and opportunities are discussed in the context of supporting real-time video applications for mobile users.

  4. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 of 50 clips with the other. Substantial reductions in compressed file size, to about half on average, are obtained for foveated compared with unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
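
    The foveation filter of the second variant can be approximated by blending a sharp frame with a blurred copy, with the blur weight growing with distance from a fixation center; the single fixed center and constants below are illustrative, whereas the paper derives its centers from saliency.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def foveate(frame, cx, cy, radius=120.0):
          blurred = gaussian_filter(frame, sigma=6)
          yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
          alpha = np.clip(np.hypot(xx - cx, yy - cy) / (3 * radius), 0.0, 1.0)
          return (1 - alpha) * frame + alpha * blurred   # sharp fovea, blurred rim

      out = foveate(np.random.rand(480, 640), cx=320, cy=240)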

  5. Region-of-interest determination and bit-rate conversion for H.264 video transcoding

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan

    2013-12-01

    This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard that fits video bit-streams to the channel bandwidth available to the client. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve-fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate through re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding for low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
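
    As an illustration of the curve-fitting idea, one common model form is R(q) = a/q + b, fit to measured rate-quantizer pairs and inverted to pick a re-quantization step for a target rate; the model form and all numbers are assumptions here, since the paper derives its own conversion models.

      import numpy as np
      from scipy.optimize import curve_fit

      q = np.array([10, 14, 18, 24, 30, 38], dtype=float)   # quantizer steps
      rate = np.array([5.1, 3.8, 3.0, 2.3, 1.9, 1.5])       # measured Mbit/s

      model = lambda q, a, b: a / q + b
      (a, b), _ = curve_fit(model, q, rate)

      target_rate = 2.0                # Mbit/s available on the outgoing channel
      q_new = a / (target_rate - b)    # invert the model to pick the re-quantizer
      print(f"re-quantize with step ~ {q_new:.1f}")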

  6. Cardiac ultrasonography over 4G wireless networks using a tele-operated robot

    PubMed Central

    Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.

    2016-01-01

    This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. The performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough comparison of video coding standards for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessments demonstrate that the H.264/AVC and high-efficiency video coding standards can achieve diagnostically lossless video quality at bitrates well within the LTE-supported data rates. Most importantly, the reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on the preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide reliable remote diagnosis, achieving quality-of-experience levels comparable with in-hospital ultrasound examinations. PMID:27733929

  7. Capture and playback synchronization in video conferencing

    NASA Astrophysics Data System (ADS)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlap with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents some solutions that are useful at end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream that can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip synchronization, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to multiparty teleconferencing environments. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
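
    A minimal sketch of the buffering idea: hold each arriving packet until its capture timestamp plus a fixed playout offset, which absorbs delay jitter and reorders out-of-sequence arrivals. The offset and the API are illustrative assumptions, not the MMT design.

      import heapq

      OFFSET = 0.200  # seconds of playout-buffering budget (assumed)

      class PlayoutBuffer:
          def __init__(self):
              self.heap = []  # min-heap ordered by capture timestamp

          def arrive(self, capture_ts, payload):
              heapq.heappush(self.heap, (capture_ts, payload))

          def due(self, now):
              """Release, in capture order, every packet whose deadline passed."""
              out = []
              while self.heap and self.heap[0][0] + OFFSET <= now:
                  out.append(heapq.heappop(self.heap)[1])
              return out

      buf = PlayoutBuffer()
      buf.arrive(0.033, "frame-2")   # arrived out of sequence
      buf.arrive(0.000, "frame-1")
      print(buf.due(now=0.240))      # -> ['frame-1', 'frame-2']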

  8. The challenges of archiving networked-based multimedia performances (Performance cryogenics)

    NASA Astrophysics Data System (ADS)

    Cohen, Elizabeth; Cooperstock, Jeremy; Kyriakakis, Chris

    2002-11-01

    Music archives and libraries have cultural preservation at the core of their charters. New forms of art often race ahead of the preservation infrastructure. The ability to stream multiple synchronized ultra-low latency streams of audio and video across a continent for a distributed interactive performance such as music and dance with high-definition video and multichannel audio raises a series of challenges for the architects of digital libraries and those responsible for cultural preservation. The archiving of such performances presents numerous challenges that go beyond simply recording each stream. Case studies of storage and subsequent retrieval issues for Internet2 collaborative performances are discussed. The development of shared reality and immersive environments generate issues about, What constitutes an archived performance that occurs across a network (in multiple spaces over time)? What are the families of necessary metadata to reconstruct this virtual world in another venue or era? For example, if the network exhibited changes in latency the performers most likely adapted. In a future recreation, the latency will most likely be completely different. We discuss the parameters of immersive environment acquisition and rendering, network architectures, software architecture, musical/choreographic scores, and environmental acoustics that must be considered to address this problem.

  9. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; L Hansen, John H.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins with the video partitioned into small segments, and several multi-modal features are extracted from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, with audio energy, slow-motion replay, scene cut density, and motion activity as features. Detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
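
    One way to sketch the "exciting and rare" scoring is to fit a density to the segment features and favour segments that combine a high excitement cue with low density; the two features and the scoring form below are illustrative assumptions, not the paper's measure.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(2)
      feats = rng.normal(0, 1, size=(300, 2))   # [audio energy, motion] per segment

      mvn = multivariate_normal(mean=feats.mean(axis=0), cov=np.cov(feats.T))
      density = mvn.pdf(feats)
      excitability = feats[:, 0] / (density + 1e-9)   # energetic AND rare

      highlights = np.argsort(excitability)[::-1][:10]
      print("top segments:", highlights)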

  10. StreaMorph: A Case for Synthesizing Energy-Efficient Adaptive Programs Using High-Level Abstractions

    DTIC Science & Technology

    2013-08-12

    technique when switching from using eight cores to one core. 1. Introduction: Real-time streaming of media data is growing in popularity. This includes ... both capture and processing of real-time video and audio, and delivery of video and audio from servers; recent usage numbers show over 800 million ... source of data, when that source is a real-time source, and it is generally not necessary to get ahead of the sink. Even with real-time sources and sinks

  11. An analysis of lecture video utilization in undergraduate medical education: associations with performance in the courses

    PubMed Central

    McNulty, John A; Hoyt, Amy; Gruener, Gregory; Chandrasekhar, Arcot; Espiritu, Baltazar; Price, Ron; Naheedy, Ross

    2009-01-01

    Background Increasing numbers of medical schools are providing videos of lectures to their students. This study sought to analyze the utilization of lecture videos by medical students in their basic science courses and to determine whether student utilization was associated with performance on exams. Methods Streaming videos of lectures (n = 149) to first-year and second-year medical students (n = 284) were made available through a password-protected server. Server logs were analyzed over a 10-week period for both classes. For each lecture, the logs recorded the time and location from which students accessed the file. A survey was administered at the end of the courses to obtain additional information about student use of the videos. Results There was a wide disparity in the level of use of lecture videos by medical students, with the majority of students accessing the lecture videos sparingly (60% of the students viewed less than 10% of the available videos). The anonymous student survey revealed that students tended to view the videos by themselves from home during weekends and prior to exams. Students who accessed lecture videos more frequently had significantly (p < 0.002) lower exam scores. Conclusion We conclude that videos of lectures are used by relatively few medical students and that individual use of videos is associated with the degree to which students are having difficulty with the subject matter. PMID:19173725

  12. SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements

    PubMed Central

    Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos

    2009-01-01

    In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating an INS/GPS and a digital camera, designed to provide timely, positioned and projected thermal images and video data streams that can be rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON, and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters as well as UAV platforms owned and operated by the Greek Air Force. PMID:22399963

  13. Map Classification In Image Data

    DTIC Science & Technology

    2015-09-25

    showing the significant portion of image and video data transfers via YouTube, Facebook, and Flickr as primary platforms from Infographic (2015) digital ... reserves • hydrography: lakes, rivers, streams, swamps, coastal flats • relief: mountains, valleys, slopes, depressions • vegetation: wooded and cleared

  14. WPSS: watching people security services

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith

    2013-10-01

    To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited and only a selection of the video streams is observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. This system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.

  15. Telearch - Integrated visual simulation environment for collaborative virtual archaeology.

    NASA Astrophysics Data System (ADS)

    Kurillo, Gregorij; Forte, Maurizio

    Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for the remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio, 2D and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.

  16. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real-time provides a safe and effective method for observing live flight testing and a strong alternative to travel to the remote test range.

  17. Creating a web-enhanced interactive preclinic technique manual: case report and student response.

    PubMed

    Boberick, Kenneth G

    2004-12-01

    This article describes the development, use, and student response to an online manual developed with off-the-shelf software and made available through a web-based course management system (Blackboard), which was used to transform a freshman restorative preclinical technique course from a lecture-only course into an interactive web-enhanced course. The goals of the project were to develop and implement a web-enhanced interactive learning experience in a preclinical restorative technique course and to shift preclinical education from a teacher-centered experience to a student-driven experience. The project was evaluated using an anonymous post-course survey (95 percent response rate) of 123 freshman students that assessed enabling (technical support and access to the technology), process (the actual experience and usability), and outcome criteria (acquisition and successful use of the knowledge gained and skills learned) of the online manual. Students responded favorably to sections called "slide galleries" where ideal and non-ideal examples of projects could be viewed. Causes, solutions, and preventive measures were provided for the errors shown. Sections called "slide series" provided cookbook directions allowing for self-paced and student-directed learning. Virtually all of the students, 99 percent, found the quality of the streaming videos adequate to excellent. Regarding Internet connections and video viewing, 65 percent of students successfully viewed the videos from a remote site; cable connections were the most reliable, dial-up connections were inadequate, and DSL connections were variable. Seventy-three percent of the students felt the videos were an effective substitute for in-class demonstrations. Students preferred video with sound over video with subtitles and preferred short video clips embedded in the text over compilation videos. The results showed that it is possible to develop and implement web-enhanced and interactive dental education in a preclinical restorative technique course that successfully delivers information beyond the textual format.

  18. Earth-Directed Coronal Hole

    NASA Image and Video Library

    2016-09-21

    A dark coronal hole that was facing towards Earth for several days spewed streams of solar wind in our direction (Sept. 18-21, 2016). A coronal hole is a magnetically open region. The magnetic fields have opened up allowing solar wind (comprised of charged particles) to stream into space. Gusts of solar wind can generate beautiful aurora when they reach Earth. The video clip shows the sun in a wavelength of extreme ultraviolet light. Movies are available at http://photojournal.jpl.nasa.gov/catalog/PIA21067

  19. Coronal Hole Rotating Towards Us

    NASA Image and Video Library

    2018-05-22

    A good-sized coronal hole came around to where it is just about facing Earth (May 16-18, 2018). Coronal holes are areas of open magnetic field from which solar wind (consisting of charged particles) streams into space. The video clip covers two days and was taken in a wavelength of extreme ultraviolet light. Such streams of particles take several days to reach Earth, but they can generate aurora, particularly nearer the poles. An animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA00575

  20. Optimal space communications techniques. [discussion of video signals and delta modulation

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

    The encoding of video signals using the Song Adaptive Delta Modulator (Song ADM) is discussed. The video signals are characterized as a sequence of pulses having arbitrary height and width. Although the ADM is suited to tracking signals having fast rise times, it was found that the DM algorithm (which permits an exponential rise for estimating an input step) results in a large overshoot and an underdamped response to the step. An overshoot suppression algorithm which significantly reduces the ringing while not affecting the rise time is presented, along with formulae for the rise time and the settling time. Channel errors and their effect on the DM-encoded bit stream were investigated.
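
    A generic adaptive delta modulator illustrates the overshoot the abstract describes: doubling the step on consecutive same-sign bits gives the exponential rise, which then overshoots a step input. This is a textbook ADM sketch, not the Song algorithm or its suppression scheme.

      import numpy as np

      def adm_encode(signal, step0=0.1):
          est, step, prev_bit = 0.0, step0, 0
          bits, estimates = [], []
          for x in signal:
              bit = 1 if x >= est else -1
              # Step adaptation: grow on consecutive same-sign bits, shrink otherwise.
              step = step * 2 if bit == prev_bit else max(step / 2, step0)
              est += bit * step
              prev_bit = bit
              bits.append(bit)
              estimates.append(est)
          return np.array(bits), np.array(estimates)

      t = np.arange(200)
      pulse = np.where((t > 50) & (t < 150), 1.0, 0.0)   # step-like video pulse
      bits, est = adm_encode(pulse)
      print("overshoot above the step:", est.max() - 1.0)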

  1. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks

    PubMed Central

    Puente Fernández, Jesús Antonio

    2018-01-01

    Prediction systems face challenges on two fronts: the relation between video quality and observed session features on the one hand, and dynamic changes in video quality on the other. Software Defined Networks (SDN) is a new concept of network architecture that provides the separation of the control plane (controller) and the data plane (switches) in network devices. Thanks to the southbound interface, it is possible to deploy monitoring tools that obtain the network status and retrieve a collection of statistics. Therefore, achieving the most accurate statistics depends on a strategy of monitoring and information requests to network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flow in SDN networks. The algorithm groups network switches into clusters according to their number of ports, so that different monitoring techniques can be applied to each cluster. Grouping avoids monitoring queries to network switches with common characteristics and thereby omits redundant information. In this way, the present proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar accuracy while decreasing the number of queries to the switches. PMID:29614049
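
    The grouping idea can be sketched as follows: cluster switches by port count and send the statistics request to one representative per cluster. The switch inventory and the polling stub are placeholders, not a real OpenFlow call.

      from collections import defaultdict

      switches = {"s1": 4, "s2": 4, "s3": 8, "s4": 8, "s5": 48}  # name -> port count

      clusters = defaultdict(list)
      for name, ports in switches.items():
          clusters[ports].append(name)

      def poll_stats(switch):                 # placeholder for an OpenFlow
          print("stats request ->", switch)   # flow-stats request

      for ports, members in clusters.items():
          poll_stats(members[0])   # one representative; skip redundant queries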

  2. Photo-acoustic and video-acoustic methods for sensing distant sound sources

    NASA Astrophysics Data System (ADS)

    Slater, Dan; Kozacik, Stephen; Kelmelis, Eric

    2017-05-01

    Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques, and when combined with high-definition video imagery it can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often carries a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric-turbulence-distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
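
    A toy illustration of the video-stream readout: treat the mean brightness of a patch, sampled once per frame, as the audio waveform; the recoverable bandwidth is then capped at half the frame rate, which is why much lower sample rates are noted above. All numbers are synthetic assumptions.

      import numpy as np

      fps = 240.0                       # assumed high-speed capture rate
      frames = 480
      t = np.arange(frames) / fps
      tone = 0.02 * np.sin(2 * np.pi * 60.0 * t)   # 60 Hz acoustic modulation

      # Simulated patch brightness: strong bias + weak modulation + sensor noise.
      patch_mean = 0.5 + tone + np.random.normal(0, 0.001, frames)

      audio = patch_mean - patch_mean.mean()        # remove the strong bias term
      spectrum = np.abs(np.fft.rfft(audio))
      freqs = np.fft.rfftfreq(frames, d=1 / fps)
      print("dominant frequency: %.1f Hz" % freqs[spectrum.argmax()])   # -> 60.0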

  3. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.

    PubMed

    Chandrasekaran, Jeyamala; Thiruvengadam, S J

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.

  4. Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption

    PubMed Central

    Chandrasekaran, Jeyamala; Thiruvengadam, S. J.

    2015-01-01

    Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603
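
    A sketch of a key-dependent S-box from the two-dimensional Henon map: iterate the map from a key-derived seed, discard a transient, and rank 256 outputs into a permutation of 0..255. The ranking construction is an illustrative assumption, not necessarily the papers' exact procedure, and the seed must lie in the attractor's basin.

      import numpy as np

      def henon_sbox(x0, y0, a=1.4, b=0.3, warmup=1000):
          x, y = x0, y0
          for _ in range(warmup):                  # discard transient iterations
              x, y = 1 - a * x * x + y, b * x
          samples = []
          for _ in range(256):
              x, y = 1 - a * x * x + y, b * x
              samples.append(x)
          # Rank the 256 chaotic samples to obtain a permutation of 0..255.
          return np.argsort(samples).astype(np.uint8)

      sbox = henon_sbox(x0=0.1, y0=0.1)            # seed derived from the key
      assert len(set(sbox.tolist())) == 256        # bijective substitution
      print(sbox[:8])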

  5. Compressed ultrasound video image-quality evaluation using a Likert scale and Kappa statistical analysis

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.

    1998-06-01

    Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five-point Likert scale (1 equals no useful information and 5 equals good diagnostic quality) was used in a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean plus or minus SD) were abdomen (4.45 plus or minus 0.71), carotid artery (4.70 plus or minus 0.36), kidney (5.0 plus or minus 0.0), liver (4.67 plus or minus 0.58) and thyroid (4.03 plus or minus 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion-vector compression algorithms as the H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high-quality S-VHS tapes and through direct digitization of S-video. Due to the large number of videoclips and questions to be presented to the radiologists, and for ease of application, we have developed a web browser interface for this video visual perception study. Because of the large numbers of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments. If the degree of agreement amongst readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make an intelligent choice of bandwidth for streaming compressed video and compressed videoclips.
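
    For reference, Cohen's kappa for two raters is the chance-corrected agreement; a minimal computation on invented Likert scores looks like this.

      import numpy as np

      rater_a = np.array([5, 4, 4, 3, 5, 4, 2, 5, 4, 3])   # invented ratings
      rater_b = np.array([5, 4, 3, 3, 5, 4, 2, 4, 4, 3])

      categories = np.arange(1, 6)
      p_observed = np.mean(rater_a == rater_b)             # raw agreement
      p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c)
                     for c in categories)                  # agreement by chance
      kappa = (p_observed - p_chance) / (1 - p_chance)
      print(f"kappa = {kappa:.2f}")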

  6. Data-driven analysis of functional brain interactions during free listening to music and speech.

    PubMed

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural stimulus functional magnetic resonance imaging (N-fMRI) such as fMRI acquired when participants were watching video streams or listening to audio streams has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanisms for comprehending complex natural music and speech.

  7. Reconstructing the flight kinematics of swarming and mating in wild mosquitoes

    PubMed Central

    Butail, Sachit; Manoukis, Nicholas; Diallo, Moussa; Ribeiro, José M.; Lehmann, Tovi; Paley, Derek A.

    2012-01-01

    We describe a novel tracking system for reconstructing three-dimensional tracks of individual mosquitoes in wild swarms and present the results of validating the system by filming swarms and mating events of the malaria mosquito Anopheles gambiae in Mali. The tracking system is designed to address noisy, low frame-rate (25 frames per second) video streams from a stereo camera system. Because flying A. gambiae move at 1–4 m/s, they appear as faded streaks in the images or sometimes do not appear at all. We provide an adaptive algorithm to search for missing streaks and a likelihood function that uses streak endpoints to extract velocity information. A modified multi-hypothesis tracker probabilistically addresses occlusions and a particle filter estimates the trajectories. The output of the tracking algorithm is a set of track segments with an average length of 0.6–1 s. The segments are verified and combined under human supervision to create individual tracks up to the duration of the video (90 s). We evaluate tracking performance using an established metric for multi-target tracking and validate the accuracy using independent stereo measurements of a single swarm. Three-dimensional reconstructions of A. gambiae swarming and mating events are presented. PMID:22628212
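
    The estimation stage described above can be pictured with a generic bootstrap particle filter under a constant-velocity motion model. The Gaussian position likelihood below is a stand-in for the paper's streak-endpoint likelihood, and all parameters are illustrative.

    ```python
    # Generic bootstrap particle filter sketch: predict with a constant-
    # velocity model, weight by a Gaussian position likelihood, resample
    # (multinomial). State per particle: [x, y, z, vx, vy, vz].
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(observations, n_particles=500, dt=0.04,
                        process_std=0.5, meas_std=0.1):
        particles = rng.normal(0, 1, size=(n_particles, 6))
        estimates = []
        for z_obs in observations:
            # Predict: propagate positions, perturb velocities.
            particles[:, :3] += particles[:, 3:] * dt
            particles[:, 3:] += rng.normal(0, process_std, (n_particles, 3))
            # Weight: likelihood of the observed 3-D position.
            d2 = ((particles[:, :3] - z_obs) ** 2).sum(axis=1)
            w = np.exp(-0.5 * d2 / meas_std**2) + 1e-300
            w /= w.sum()
            estimates.append(w @ particles[:, :3])  # weighted mean estimate
            # Resample to concentrate particles on likely states.
            particles = particles[rng.choice(n_particles, n_particles, p=w)]
        return np.array(estimates)

    true_path = np.cumsum(rng.normal(0, 0.05, (50, 3)), axis=0)
    est = particle_filter(true_path + rng.normal(0, 0.05, (50, 3)))
    print(est.shape)  # (50, 3)
    ```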

  8. Moving People from Science Adjacent to Science Doers with Twitch.tv

    NASA Astrophysics Data System (ADS)

    Gay, Pamela L.; CosmoQuest

    2017-10-01

    The CosmoQuest community is testing the ability to attract people from playing online videogames to doing fully online citizen science by engaging people through the Twitch.tv streaming platform. Twitch.tv launched in 2011 as an online platform for video gamers to stream their gameplay while providing narrative. In its six years of regular growth, the platform has added support for people playing non-video games, and for those participating in non-game activities. As part of their expansion, in April 2017, Twitch.tv hosted a science week during which they streamed the Cosmos series and allowed different feeds to provide real-time commentary. They also hosted panel discussions on a variety of science topics. CosmoQuest participated in this event and used it as a jumping-off point for beginning to interact with Twitch.tv community members online. With CosmoQuest’s beta launch of Image Detectives, they expanded their use of this streaming platform to include regular “office hours”, during which team members did science with CosmoQuest’s online projects, took questions from community members, and otherwise promoted the CosmoQuest community. This presentation examines this case study, looking at how well different kinds of Twitter engagements attracted audiences, the conversion rate from viewer to subscriber, and how effectively CosmoQuest was able to migrate users from viewing citizen science on Twitch.tv to participating in citizen science on CosmoQuest.org. This project was supported through NASA cooperative agreement NNX17AD20A.

  9. Cultural Respect Encompassing Simulation Training: Being Heard About Health Through Broadband

    PubMed Central

    Min-Yu Lau, Phyllis; Woodward-Kron, Robyn; Livesay, Karen; Elliott, Kristine; Nicholson, Patricia

    2016-01-01

    Background Cultural Respect Encompassing Simulation Training (CREST) is a learning program that uses simulation to provide health professional students and practitioners with strategies to communicate sensitively with culturally and linguistically diverse (CALD) patients. It consists of training modules with a cultural competency evaluation framework and CALD simulated patients to interact with trainees in immersive simulation scenarios. The aim of this study was to test the feasibility of expanding the delivery of CREST to rural Australia using live video streaming; and to investigate the fidelity of cultural sensitivity – defined within the process of cultural competency which includes awareness, knowledge, skills, encounters and desire – of the streamed simulations. Design and Methods In this mixed-methods evaluative study, health professional trainees were recruited at three rural academic campuses and one rural hospital to pilot CREST sessions via live video streaming and simulation from the city campus in 2014. Cultural competency, teaching and learning evaluations were conducted. Results Forty-five participants rated 26 reliable items before and after each session and reported statistically significant improvement in 4 of 5 cultural competency domains, particularly in cultural skills (P<0.05). Qualitative data indicated an overall acknowledgement amongst participants of the importance of communication training and the quality of the simulation training provided remotely by CREST. Conclusions Cultural sensitivity education using live video-streaming and simulation can contribute to health professionals’ learning and is effective in improving cultural competency. CREST has the potential to be embedded within health professional curricula across Australian universities to address issues of health inequalities arising from a lack of cultural sensitivity training. Significance for public health There are significant health inequalities for migrant populations. They commonly have poorer access to health services and poorer health outcomes than the Australian-born population. The factors are multiple, complex and include language and cultural barriers. To address these disparities, culturally competent patient-centred care is increasingly recognised to be critical to improving care quality, patient satisfaction, patient compliance and patient outcomes. Yet there is a lack of quality in the teaching and learning of cultural competence in healthcare education curricula, particularly in rural settings where qualified trainers and resources can be limited. The Cultural Respect Encompassing Simulation Training (CREST) program offers opportunities to health professional students and practitioners to learn and develop communication skills with professionally trained culturally and linguistically diverse simulated patients who contribute their experiences and health perspectives. It has already been shown to contribute to health professionals' learning and is effective in improving cultural competency in urban settings. This study demonstrates that CREST when delivered via live video-streaming and simulation can achieve similar results in rural settings. PMID:27190975

  10. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
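
    The length-preservation property can be illustrated with a toy example: encrypting only the sign information of residual coefficients under a keystream changes no codeword lengths, so the bit rate is untouched. The keystream construction (SHA-256 in counter mode) and the integer-coefficient model are assumptions for illustration; the actual scheme operates on CABAC bin-strings with a standard stream cipher.

    ```python
    # Toy length-preserving selective encryption: flip coefficient signs
    # under a keystream. Flipping is an involution, so applying the same
    # keystream again decrypts, exactly like a stream-cipher XOR.
    import hashlib

    def keystream_bits(key: bytes, n: int):
        out, ctr = [], 0
        while len(out) < n:
            block = hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            for byte in block:
                out.extend((byte >> i) & 1 for i in range(8))
            ctr += 1
        return out[:n]

    def encrypt_signs(coeffs, key):
        ks = keystream_bits(key, len(coeffs))
        # Zero coefficients carry no sign and are left untouched.
        return [(-c if bit and c != 0 else c) for c, bit in zip(coeffs, ks)]

    coeffs = [3, -1, 0, 2, -5, 1]  # invented residual coefficients
    enc = encrypt_signs(coeffs, b"secret-key")
    assert encrypt_signs(enc, b"secret-key") == coeffs  # decryption round-trip
    ```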

  11. Modeling Coupled Physical and Chemical Erosional Processes Using Structure from Motion Reconstruction and Multiphysics Simulation: Applications to Knickpoints in Bedrock Streams in Limestone Caves and on Earth's Surface

    NASA Astrophysics Data System (ADS)

    Bosch, R.; Ward, D.

    2017-12-01

    Investigation of erosion rates and processes at knickpoints in surface bedrock streams is an active area of research, involving complex feedbacks in the coupled relationships between dissolution, abrasion, and plucking that have not been sufficiently addressed. Even less research has addressed how these processes operate to propagate knickpoints through cave passages in layered sedimentary rocks, despite these features being common along subsurface streams. In both settings, there is evidence for mechanical and chemical erosion, but in cave passages the different hydrologic and hydraulic regimes, combined with an important role for the dissolution process, affect the relative roles and coupled interactions between these processes, and distinguish them from surface stream knickpoints. Using a novel approach of imaging cave passages using Structure from Motion (SFM), we create 3D geometry meshes to explore these systems using multiphysics simulation, and compare the processes as they occur in caves with those in surface streams. Here we focus on four field sites with actively eroding streambeds that include knickpoints: Upper River Acheron and Devil's Cooling Tub in Mammoth Cave, Kentucky; and two surface streams in Clermont County, Ohio, Avey's Run and Fox Run. SFM 3D reconstructions are built using images exported from 4K video shot at each field location. We demonstrate that SFM is a viable imaging approach for reconstructing cave passages with complex morphologies. We then use these reconstructions to create meshes upon which to run multiphysics simulations using STAR-CCM+. Our approach incorporates multiphase free-surface computational fluid dynamics simulations with sediment transport modeled using discrete element method grains. Physical and chemical properties of the water, bedrock, and sediment enable computation of shear stress, sediment impact forces, and chemical kinetic conditions at the bed surface. Preliminary results prove the efficacy of commercially available multiphysics simulation software for modeling various flow conditions, erosional processes, and their complex coupled interactions in cave passages and in surface stream channels to expand knowledge and understanding of overall cave system development and river profile erosion.

  12. An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.

    PubMed

    Lin, C F; Hung, S I; Chiang, I H

    2010-10-01

    In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams achieve transmission bit error rates (BERs) of 10^-7, 10^-4, and 10^-3 simultaneously. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this proposed transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at a high power transmission rate with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results have shown that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.
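
    The QoS-driven mapping described above can be sketched as a simple allocation rule: each stream is assigned power, modulation, and protection according to the BER it must meet. The numeric profiles below are invented placeholders, not the paper's parameters.

    ```python
    # Toy allocation: pick, per stream, the cheapest transmission profile
    # whose achieved BER still meets the stream's requirement.
    PROFILES = {
        # achieved_ber: (tx_power, modulation, protection)
        1e-7: ("high",   "BPSK",   "STBC + strong FEC"),
        1e-4: ("medium", "QPSK",   "moderate FEC"),
        1e-3: ("low",    "16-QAM", "light FEC"),
    }

    def allocate(streams):
        plan = {}
        for name, required_ber in streams:
            feasible = [b for b in PROFILES if b <= required_ber]
            ber = max(feasible) if feasible else min(PROFILES)
            plan[name] = PROFILES[ber]
        return plan

    streams = [("G.729 audio", 1e-7), ("JPEG 2000 image", 1e-4),
               ("MPEG-4 video", 1e-3)]
    for name, profile in allocate(streams).items():
        print(name, "->", profile)
    ```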

  13. Development of ultrasonic electrostatic microjets for distributed propulsion and microflight

    NASA Astrophysics Data System (ADS)

    Amirparviz, Babak

    This dissertation details the first attempt to design and fabricate a distributed micro propulsion system based on acoustic streaming. A novel micro propulsion method is suggested by combining Helmholtz resonance, acoustic streaming, flow entrainment and thrust augmentation. In this method, oscillatory motion of an electrostatically actuated diaphragm creates a high frequency acoustic field inside the cavity of a Helmholtz resonator. The initial fluid motion velocity is amplified by the Helmholtz resonator structure and creates a jet flow at the exit nozzle. Acoustic streaming is the phenomenon responsible for primary jet stream creation. Primary jets produced by a few resonators can be combined in an ejector configuration to induce flow entrainment and thrust augmentation. Basic governing equations for the electrostatic actuator, deformation of the diaphragm and the fluid flow inside the resonator are derived. These equations are linearized and used to derive an equivalent electrical circuit model for the operation of the device. Numerical solution of the governing equations and simulation of the circuit model are used to predict the performance of the experimental systems. Thrust values as high as 30.3 μN are expected per resonator. A micromachined electrostatically-driven high frequency Helmholtz resonator prototype is designed and fabricated. A new microfabrication technique is developed for bulk micromachining and in particular fabrication of the resonator. Geometric stops for wet anisotropic etching of silicon are introduced for the first time for structure formation. Arrays of high frequency (>60 kHz) micro Helmholtz resonators are fabricated. In one sample more than 1000 resonators cover the surface of a four-inch silicon wafer and in effect convert it to a distributed propulsion system. A high-yield (>85%) microfabrication process is presented for realization of this propulsion system, taking advantage of newly developed deep glass micromachining and lithography on thin (15 μm) silicon methods. Extensive tests and characterization are performed on the micro jets using current frequency component analysis, laser interferometry, acoustic measurements, hot-wire anemometers, video particle imaging and load cells. The occurrence of acoustic streaming at jet nozzles is verified and flow velocities exceeding 1 m/s are measured at the 15 μm × 330 μm jet exit nozzle.

  14. Analysis and application of intelligence network based on FTTH

    NASA Astrophysics Data System (ADS)

    Feng, Xiancheng; Yun, Xiang

    2008-12-01

    With the continued rapid growth of the Internet, new network services emerge in an endless stream, especially network gaming, video conferencing, and video on demand, and bandwidth requirements increase continuously. Network and optical device technologies are developing swiftly. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication service, traditional data service and traditional TV service, as well as future digital TV and VOD. With its huge bandwidth, FTTH is widely seen as the final solution for broadband access and has become the final goal of optical access network development. This paper first introduces the main services that FTTH supports and analyzes key technologies such as FTTH system composition, topological structure, multiplexing, and optical cables and devices, focusing on two realization methods, PON and P2P. It then proposes that an FTTH solution can support comprehensive access (services such as broadband data, voice, video and narrowband private lines). Finally, it presents an engineering application of FTTH in a district and a building, which brings enormous economic and social benefits.

  15. Online Job Allocation with Hard Allocation Ratio Requirement (Author’s Manuscript)

    DTIC Science & Technology

    2016-04-14

    where each job can only be served by a subset of servers. Such a problem exists in many emerging Internet services, such as YouTube, Netflix, etc. For example, in the case of YouTube, each video is replicated only in a small number of servers, and each server can only serve a limited number of streams simultaneously. When a user accesses YouTube and makes a request to watch a video, this request needs to be allocated to one of the servers that

  16. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
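
    The core idea of unequal error protection can be sketched with a greedy toy: given scalable layers with importance weights, assign parity so that more important layers receive more redundancy under a byte budget. The greedy rule and the numbers are illustrative; the paper's algorithm is R-D optimized across two transmission slots using hypotheses about its own future behavior.

    ```python
    # Greedy unequal-error-protection sketch: repeatedly give the next
    # parity unit to the layer with the highest importance per unit of
    # parity already assigned (a diminishing-returns heuristic).

    def assign_protection(layers, budget, step=1):
        """layers: list of (name, importance, size_bytes); returns parity map."""
        parity = {name: 0 for name, _, _ in layers}
        spent = sum(size for _, _, size in layers)  # payload bytes first
        while spent + step <= budget:
            name, _, _ = max(layers, key=lambda l: l[1] / (1 + parity[l[0]]))
            parity[name] += step
            spent += step
        return parity

    # Invented example: a base layer and two enhancement layers.
    layers = [("base", 10.0, 300), ("enh1", 3.0, 400), ("enh2", 1.0, 500)]
    print(assign_protection(layers, budget=1500))
    ```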

  17. Use of on-demand video to provide patient education on spinal cord injury

    PubMed Central

    Hoffman, Jeanne; Salzman, Cynthia; Garbaccio, Chris; Burns, Stephen P.; Crane, Deborah; Bombardier, Charles

    2011-01-01

    Background/objective Persons with chronic spinal cord injury (SCI) have a high lifetime need for ongoing patient education to reduce the risk of serious and costly medical conditions. We have addressed this need through monthly in-person public education programs called SCI Forums. More recently, we began videotaping these programs for streaming on our website to reach a geographically diverse audience of patients, caregivers, and providers. Design/methods We compared information from the in-person forums to that of the same forums shown streaming on our website during a 1-year period. Results Both the in-person and Internet versions of the forums received high overall ratings from individuals who completed evaluation forms. Eighty-eight percent of online evaluators and 96% of in-person evaluators reported that they gained new information from the forum; 52 and 64% said they changed their attitude, and 61 and 68% said they would probably change their behavior or take some kind of action based on information they learned. Ninety-one percent of online evaluators reported that video is better than text for presenting this kind of information. Conclusion Online video is an accessible, effective, and well-accepted way to present ongoing SCI education and can reach a wider geographical audience than in-person presentations. PMID:21903014

  18. Change-based threat detection in urban environments with a forward-looking camera

    NASA Astrophysics Data System (ADS)

    Morton, Kenneth, Jr.; Ratto, Christopher; Malof, Jordan; Gunter, Michael; Collins, Leslie; Torrione, Peter

    2012-06-01

    Roadside explosive threats continue to pose a significant risk to soldiers and civilians in conflict areas around the world. These objects are easy to manufacture and procure, but due to their ad hoc nature, they are difficult to reliably detect using standard sensing technologies. Although large roadside explosive hazards may be difficult to conceal in rural environments, urban settings provide a much more complicated background where seemingly innocuous objects (e.g., piles of trash, roadside debris) may be used to obscure threats. Since direct detection of all innocuous objects would flag too many objects to be of use, techniques must be employed to reduce the number of alarms generated and highlight only a limited subset of possibly threatening regions for the user. In this work, change detection techniques are used to reduce false alarm rates and increase detection capabilities for possible threat identification in urban environments. The proposed model leverages data from multiple video streams collected over the same regions by first aligning the videos and then using various distance metrics over image keypoints to detect changes between the video streams. Data collected at an urban warfare simulation range at an Eastern US test site was used to evaluate the proposed approach, and significant reductions in false alarm rates compared to simpler techniques are illustrated.
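
    A minimal sketch of keypoint-based change scoring between two co-registered frames from separate passes, using OpenCV ORB features: the fraction of unmatched keypoints serves as a stand-in for the distance metrics in the paper, alignment is assumed already done, and the file names are hypothetical.

    ```python
    # Change score between two aligned grayscale frames: detect ORB
    # keypoints in each, match descriptors, and report the fraction of
    # pass-1 keypoints with no distinctive counterpart in pass 2.
    import cv2

    def change_score(img_a, img_b, ratio=0.75):
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return 1.0  # no features at all: treat as maximal change
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(des_a, des_b, k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        return 1.0 - len(good) / max(len(kp_a), 1)

    a = cv2.imread("pass1_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    b = cv2.imread("pass2_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    print(change_score(a, b))
    ```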

  19. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    NASA Astrophysics Data System (ADS)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and the one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. Particularly, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames, and temporal information from plume motion between frames. We build large leak datasets for training and evaluating purposes by collecting about 20 videos (i.e. 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10–60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, reaches an accuracy of 58.3%. The integration of the two-stream ConvNets gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair programs.
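
    A minimal PyTorch sketch of the two-stream structure: a spatial stream over a single RGB plume frame and a temporal stream over stacked optical-flow fields, fused by averaging class logits. Only the two-stream design and the eight leak-size classes come from the abstract; layer sizes, the flow-stack depth, and fusion by averaging are assumptions.

    ```python
    # Two-stream ConvNet with late fusion over 8 leak-size classes.
    import torch
    import torch.nn as nn

    def stream(in_channels, n_classes=8):
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    class TwoStreamNet(nn.Module):
        def __init__(self, n_flow_frames=10, n_classes=8):
            super().__init__()
            self.spatial = stream(3, n_classes)                    # one RGB frame
            self.temporal = stream(2 * n_flow_frames, n_classes)  # stacked (dx, dy) flow
        def forward(self, rgb, flow):
            # Late fusion: average the two streams' class scores.
            return (self.spatial(rgb) + self.temporal(flow)) / 2

    net = TwoStreamNet()
    logits = net(torch.randn(4, 3, 112, 112), torch.randn(4, 20, 112, 112))
    print(logits.shape)  # torch.Size([4, 8])
    ```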

  20. Schemes for efficient transmission of encoded video streams on high-speed networks

    NASA Astrophysics Data System (ADS)

    Ramanathan, Srinivas; Vin, Harrick M.; Rangan, P. Venkat

    1994-04-01

    In this paper, we argue that significant performance benefits can accrue if integrated networks implement application-specific mechanisms that account for the diversities in media compression schemes. Towards this end, we propose a simple, yet effective, strategy called Frame Induced Packet Discarding (FIPD), in which, upon detection of loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. In order to analytically quantify the performance of FIPD so as to obtain fractional frame losses that can be guaranteed to video channels, we develop a finite-state, discrete-time Markov chain model of the FIPD strategy. The fractional frame loss thus computed can serve as the criterion for admission control at the network. Performance evaluations demonstrate the utility of the FIPD strategy.
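
    The discard rule itself is easy to simulate, as in the minimal sketch below, which assumes independent packet losses and an illustrative threshold; the paper quantifies the same behavior analytically with its Markov chain model.

    ```python
    # FIPD simulation: once losses in a frame reach the application's
    # threshold, the frame is undecodable, so the network discards the
    # frame's remaining packets instead of wasting capacity on them.
    import random

    def fipd_frame(n_packets, loss_prob, threshold):
        """Returns (delivered, discarded, frame_ok) for one video frame."""
        lost = delivered = discarded = 0
        for i in range(n_packets):
            if lost >= threshold:
                discarded += n_packets - i  # drop the now-useless remainder
                break
            if random.random() < loss_prob:
                lost += 1
            else:
                delivered += 1
        return delivered, discarded, lost < threshold

    random.seed(1)
    stats = [fipd_frame(50, 0.02, threshold=3) for _ in range(10000)]
    print("fractional frame loss:", sum(not ok for *_, ok in stats) / len(stats))
    ```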

  1. Blade counting tool with a 3D borescope for turbine applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.; Gu, Jiajun; Tao, Li; Song, Guiju; Han, Jie

    2014-07-01

    Video borescopes are widely used for turbine and aviation engine inspection to guarantee the health of blades and prevent blade failure during operation. When the moving components of a turbine engine are inspected with a video borescope, the operator must view every blade in a given stage. The blade counting tool is video interpretation software that runs simultaneously in the background during inspection. It identifies moving turbine blades in a video stream, then tracks and counts the blades as they move across the screen. This approach includes blade detection to identify blades in different inspection scenarios and blade tracking to perceive blade movement even in hand-turning engine inspections. The software is able to label each blade by comparing counting results to a known blade count for the engine type and stage. On-screen indications show the borescope user labels for each blade and how many blades have been viewed as the turbine is rotated.
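
    A toy version of the counting logic: tracked blades are counted the first time their position crosses a fixed count-line in the image. Detection and tracking are assumed to already produce per-frame positions; the line position, track data, and left-to-right crossing direction are invented for illustration.

    ```python
    # Count tracked blades that cross a vertical count-line at x = line_x.

    def count_blades(tracks, line_x=320):
        """tracks: dict track_id -> list of per-frame x positions."""
        count = 0
        for positions in tracks.values():
            # A crossing: consecutive positions straddle the line (left to right).
            if any(a < line_x <= b for a, b in zip(positions, positions[1:])):
                count += 1
        return count

    tracks = {1: [100, 250, 400], 2: [300, 310, 315], 3: [200, 330, 500]}
    print(count_blades(tracks))  # 2 blades crossed x = 320
    ```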

  2. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of the Internet, every aspect of life is going online. From online working to watching videos, everything is now available on the Internet. Alongside the greater business benefits and increased availability, there is the major challenge of securing data and protecting its ownership. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods offer limited robustness and imperceptibility, and their computational complexity does not suit low-power devices. In this paper, we propose a new method to address the problem of robustness and imperceptibility. Experiments show that our method achieves better robustness and imperceptibility and is more computationally efficient than previous approaches in practice. Hence our method can easily be applied on low-power devices.

  3. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.

  4. SCORHE: a novel and practical approach to video monitoring of laboratory mice housed in vivarium cage racks.

    PubMed

    Salem, Ghadi H; Dennis, John U; Krynitsky, Jonathan; Garmendia-Cedillos, Marcial; Swaroop, Kanchan; Malley, James D; Pajevic, Sinisa; Abuhatzira, Liron; Bustin, Michael; Gillet, Jean-Pierre; Gottesman, Michael M; Mitchell, James B; Pohida, Thomas J

    2015-03-01

    The System for Continuous Observation of Rodents in Home-cage Environment (SCORHE) was developed to demonstrate the viability of compact and scalable designs for quantifying activity levels and behavior patterns for mice housed within a commercial ventilated cage rack. The SCORHE in-rack design provides day- and night-time monitoring with the consistency and convenience of the home-cage environment. The dual-video camera custom hardware design makes efficient use of space, does not require home-cage modification, and is animal-facility user-friendly. Given the system's low cost and suitability for use in existing vivariums without modification to the animal husbandry procedures or housing setup, SCORHE opens up the potential for the wider use of automated video monitoring in animal facilities. SCORHE's potential uses include day-to-day health monitoring, as well as advanced behavioral screening and ethology experiments, ranging from the assessment of the short- and long-term effects of experimental cancer treatments to the evaluation of mouse models. When used for phenotyping and animal model studies, SCORHE aims to eliminate the concerns often associated with many mouse-monitoring methods, such as circadian rhythm disruption, acclimation periods, lack of night-time measurements, and short monitoring periods. Custom software integrates two video streams to extract several mouse activity and behavior measures. Studies comparing the activity levels of ABCB5 knockout and HMGN1 overexpresser mice with their respective C57BL parental strains demonstrate SCORHE's efficacy in characterizing the activity profiles for singly- and doubly-housed mice. Another study was conducted to demonstrate the ability of SCORHE to detect a change in activity resulting from administering a sedative.

  5. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    NASA Astrophysics Data System (ADS)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images were performed at the University Hospital Aachen. The footage was edited stereoscopically at the Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips by a file type that does not depend on a television signal such as PAL or NTSC. Twenty-five 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as control. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was the case with some participants in the stereoscopic group, too, the estimation of depth features reflected the enhanced depth impression provided by stereoscopy. Conclusion: Following this first implementation of stereoscopic video teaching, medical students who are inexperienced with ENT surgical procedures are better able to reproduce depth information, and therefore anatomically complex structures. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.

  6. Efficient reversible data hiding in encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2014-09-01

    Due to the security and privacy-preserving requirements for cloud data management, it is sometimes desired that video content is accessible in an encrypted form. Reversible data hiding in the encrypted domain is an emerging technology, as it can perform data hiding in encrypted videos without decryption, which preserves the confidentiality of the content. Furthermore, the original cover can be losslessly restored after decryption and data extraction. An efficient reversible data hiding scheme for encrypted H.264/AVC videos is proposed. During H.264/AVC encoding, the intraprediction mode, motion vector difference, and the sign bits of the residue coefficients are encrypted using a standard stream cipher. Then, the data-hider who does not know the original video content, may reversibly embed secret data into the encrypted H.264/AVC video by using a modified version of the histogram shifting technique. A scale factor is utilized for selecting the embedding zone, which is scalable for different capacity requirements. With an encrypted video containing hidden data, data extraction can be carried out either in the encrypted or decrypted domain. In addition, real reversibility is realized so that data extraction and video recovery are free of any error. Experimental results demonstrate the feasibility and efficiency of the proposed scheme.
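
    For orientation, the sketch below shows the textbook histogram-shifting embedder on a list of small integer values: the peak bin carries one payload bit per occurrence, and bins to its right shift by one to make room. The paper applies a modified variant of this technique inside the encrypted H.264/AVC domain, with a scale factor selecting the embedding zone.

    ```python
    # Textbook histogram shifting: embed bits at the histogram peak,
    # shift larger values by one; extraction recovers both the payload
    # and the exact original values (real reversibility).
    from collections import Counter

    def embed(values, bits):
        peak = Counter(values).most_common(1)[0][0]
        out, it = [], iter(bits)
        for v in values:
            if v > peak:
                out.append(v + 1)            # shift to empty the bin peak+1
            elif v == peak:
                out.append(v + next(it, 0))  # peak occurrence encodes a bit
            else:
                out.append(v)
        return out, peak

    def extract(marked, peak):
        bits = [1 if v == peak + 1 else 0 for v in marked if v in (peak, peak + 1)]
        restored = [v - 1 if v > peak else v for v in marked]
        return bits, restored

    vals = [0, 1, 0, 2, 0, 1, 0, 3]  # invented residual values; peak bin is 0
    marked, peak = embed(vals, [1, 0, 1, 1])
    bits, restored = extract(marked, peak)
    assert restored == vals and bits == [1, 0, 1, 1]
    ```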

  7. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    PubMed

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
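
    A heavily simplified sketch of the temporal-coherence idea: a CRP-style assignment in which the cluster of the immediately preceding tracklet gets a boosted count, so neighbors tend to share an entity. The boost weight and the omission of appearance likelihoods are illustrative assumptions; the paper's TC-CRP and TC-CRF are fuller Bayesian nonparametric models.

    ```python
    # CRP assignment with a temporal-coherence boost for the previous
    # tracklet's cluster; alpha controls the chance of opening a new cluster.
    import random

    def tc_crp_assign(assignments, t, alpha=1.0, tc_boost=3.0):
        """assignments: cluster id per tracklet 0..t-1; returns cluster for t."""
        weights = {}
        for i, c in enumerate(assignments):
            w = 1.0 + (tc_boost if i == t - 1 else 0.0)  # boost temporal neighbor
            weights[c] = weights.get(c, 0.0) + w
        new_cluster = max(weights, default=-1) + 1
        weights[new_cluster] = alpha  # CRP: possibility of a new entity
        r = random.uniform(0, sum(weights.values()))
        for c, w in sorted(weights.items()):
            r -= w
            if r <= 0:
                return c
        return new_cluster

    random.seed(0)
    clusters = []
    for t in range(20):
        clusters.append(tc_crp_assign(clusters, t))
    print(clusters)  # runs of equal ids reflect temporal coherence
    ```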

  8. Earth-Facing Coronal Holes

    NASA Image and Video Library

    2016-11-09

    Two good-sized coronal holes have rotated around to the center of the sun where they will be spewing solar wind towards Earth (Nov. 8-9, 2016). Coronal holes are areas of open magnetic field from which solar wind particles stream into space. In this wavelength of extreme ultraviolet light they appear as the two dark areas at the center and lower portion of the sun. The stream of particles should reach Earth in a few days and is likely to generate aurora. Videos are available at http://photojournal.jpl.nasa.gov/catalog/PIA16909

  9. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  10. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  11. NASA in Silicon Valley Live - Episode 02 - Self-driving Robots, Planes and Automobiles

    NASA Image and Video Library

    2018-01-26

    NASA in Silicon Valley Live is a live show streamed on Twitch.tv that features conversations with the various researchers, scientists, engineers and all around cool people who work at NASA to push the boundaries of innovation. In this episode livestreamed on January 26, 2018, we explore autonomy, or “self-driving” technologies with Terry Fong, NASA chief roboticist, and Diana Acosta, technical lead for autonomous systems and robotics. Video credit: NASA/Ames Research Center NASA's Ames Research Center is located in California's Silicon Valley. Follow us on social media to hear about the latest developments in space, science, technology and aeronautics.

  12. Slow Monitoring Systems for CUORE

    NASA Astrophysics Data System (ADS)

    Dutta, Suryabrata; Cuore Collaboration

    2016-09-01

    The Cryogenic Underground Observatory for Rare Events (CUORE) is a ton-scale neutrinoless double-beta decay experiment under construction at the Laboratori Nazionali del Gran Sasso (LNGS). The experiment comprises 988 TeO2 bolometric crystals arranged into 19 towers and operated at a temperature of 10 mK. We have developed slow monitoring systems to monitor the cryostat during detector installation, commissioning, data taking, and other crucial phases of the experiment. Our systems use responsive LabVIEW virtual instruments and video streams of the cryostat. We built a website using the Angular, Bootstrap, and MongoDB frameworks to display this data in real time. The website can also display archival data and send alarms. I will present how we constructed these slow monitoring systems to be robust, accurate, and secure, while maintaining reliable access for the entire collaboration from any platform in order to ensure efficient communications and fast diagnoses of all CUORE systems.

  13. Applying mobile and pervasive computer technology to enhance coordination of work in a surgical ward.

    PubMed

    Hansen, Thomas Riisgaard; Bardram, Jakob E

    2007-01-01

    Collaboration, coordination, and communication are crucial in maintaining an efficient and smooth flow of work in an operating ward. This coordination, however, often comes at a high price in terms of unsuccessfully trying to get hold of people, disturbing telephone calls, looking for people, and unnecessary stress. To address this situation and to increase the quality of work in operating wards, we have designed a set of pervasive computer systems that support what we call context-mediated communication and awareness. These systems use large interactive displays, video streaming from key locations, tracking systems, and mobile devices to support social awareness and different types of communication modalities relevant to the current context. In this paper we report qualitative data from a one-year deployment of the system in a local hospital. Overall, this study shows that 75% of the participants strongly agreed that these systems had made their work easier.

  14. Bonsai: an event-based framework for processing and controlling data streams

    PubMed Central

    Lopes, Gonçalo; Bonacchi, Niccolò; Frazão, João; Neto, Joana P.; Atallah, Bassam V.; Soares, Sofia; Moreira, Luís; Matias, Sara; Itskov, Pavel M.; Correia, Patrícia A.; Medina, Roberto E.; Calcaterra, Lorenza; Dreosti, Elena; Paton, Joseph J.; Kampff, Adam R.

    2015-01-01

    The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation. PMID:25904861
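
    Bonsai itself is a C# visual language built on reactive (Rx) streams; the tiny Python sketch below only mirrors its dataflow composition style, with each processing node as a generator that transforms an event stream. Node names and the toy "frames" are invented for illustration.

    ```python
    # Generator-based dataflow pipeline: source -> threshold -> centroid,
    # processing one event (frame) at a time, as in a reactive graph.

    def source(frames):
        for frame in frames:
            yield frame

    def threshold(stream, level):
        for frame in stream:
            yield [1 if px > level else 0 for px in frame]

    def centroid(stream):
        for mask in stream:
            on = [i for i, v in enumerate(mask) if v]
            yield sum(on) / len(on) if on else None

    frames = [[0, 3, 9, 8, 1], [7, 8, 0, 0, 1]]  # toy 1-D "frames"
    for c in centroid(threshold(source(frames), level=5)):
        print(c)  # per-frame centroid of above-threshold pixels
    ```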

  15. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of pixel-level spectral color filters has enabled the development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyper-spectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, offering the best display of the content depending on the task performed, e.g., spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the 9-band SWIR camera we developed, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution, as well as to combined multi-frame and multi-band processing.
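
    As a baseline for what demosaicking means here, the sketch below interpolates each band of an N-band mosaic independently by normalized convolution, assuming a 3x3 repeating filter layout. This is only a simple reference point; the paper's multi-band super-resolution approach is considerably more sophisticated.

    ```python
    # Per-band demosaicking baseline: each band samples a sparse regular
    # grid; missing pixels are filled by normalized convolution (local sum
    # of samples divided by local sum of the sampling mask).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def demosaic(mosaic, n=3):
        """mosaic: 2-D array; band (i, j) of the n*n pattern owns pixel
        (r, c) iff r % n == i and c % n == j. Returns an (n*n, H, W) cube."""
        bands = []
        for i in range(n):
            for j in range(n):
                mask = np.zeros_like(mosaic)
                mask[i::n, j::n] = 1.0
                num = uniform_filter(mosaic * mask, size=2 * n - 1)
                den = uniform_filter(mask, size=2 * n - 1)
                bands.append(num / np.maximum(den, 1e-9))
        return np.stack(bands)

    cube = demosaic(np.random.rand(90, 120))  # placeholder 9-band mosaic
    print(cube.shape)  # (9, 90, 120)
    ```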

  16. Problems with indirect determinations of peak streamflows in steep, desert stream channels

    USGS Publications Warehouse

    Glancy, Patrick A.; Williams, Rhea P.

    1994-01-01

    Many peak streamflow values used in flood analyses for desert areas are derived using the Manning equation. Data used in the equation are collected after the flow has subsided, and peak flow is thereby determined indirectly. Most measurement problems and associated errors in peak-flow determinations result from (1) channel erosion or deposition that cannot be discerned or properly evaluated after the fact, (2) unsteady and non-uniform flow that rapidly changes in magnitude, and (3) appreciable sediment transport that has unknown effects on energy dissipation. High calculated velocities and Froude numbers are unacceptable to some investigators. Measurement results could be improved by recording flows with a video camera, installing a recording stream gage and recording rain gages, measuring channel scour with buried chains, analyzing measured data by multiple techniques, and supplementing indirect measurements with direct measurements of stream velocities in similar ephemeral streams.
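
    For reference, the Manning equation behind these indirect determinations is Q = (1/n) A R^(2/3) S^(1/2) in SI units, with roughness coefficient n, flow area A, hydraulic radius R, and energy slope S. The channel numbers below are invented; the point is that post-flood uncertainty in A (scour or fill) and n (sediment-laden flow) propagates directly into Q.

    ```python
    # Manning equation for peak discharge (SI units).

    def manning_discharge(n, area_m2, hydraulic_radius_m, slope):
        return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

    # Hypothetical desert-channel cross-section surveyed after a flash flood:
    q = manning_discharge(n=0.035, area_m2=12.0, hydraulic_radius_m=0.9, slope=0.01)
    print(f"peak discharge ~ {q:.1f} m^3/s")
    ```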

  17. Live Educational Outreach for Ocean Exploration: High-Bandwidth Ship-to-Shore Broadcasts Using Internet2

    NASA Astrophysics Data System (ADS)

    Coleman, D. F.; Ballard, R. D.

    2005-12-01

    During the past 3 field seasons, our group at the University of Rhode Island Graduate School of Oceanography, in partnership with the Institute for Exploration and a number of educational institutions, has conducted a series of ocean exploration expeditions with a significant focus on educational outreach through "telepresence" - utilizing live transmissions of video, audio, and data streams across the Internet and Internet2. Our educational partners include Immersion Presents, Boys and Girls Clubs of America, the Jason Foundation for Education, and the National Geographic Society, all who provided partial funding for the expeditions. The primary funding agency each year was NOAA's Office of Ocean Exploration and our outreach efforts were conducted in collaboration with them. During each expedition, remotely operated vehicle (ROV) systems were employed to examine interesting geological and archaeological sites on the seafloor. These expeditions include the investigation of ancient shipwrecks in the Black Sea in 2003, a survey of the Titanic shipwreck site in 2004, and a detailed sampling and mapping effort at the Lost City Hydrothermal Field in 2005. High-definition video cameras on the ROVs collected the footage that was then digitally encoded, IP-encapsulated, and streamed across a satellite link to a shore-based hub, where the streams were redistributed. During each expedition, live half-hour-long educational broadcasts were produced 4 times per day for 10 days. These shows were distributed using satellite and internet technologies to a variety of venues, including museums, aquariums, science centers, public schools, and universities. In addition to the live broadcasts, educational products were developed to enhance the learning experience. These include activity modules and curriculum-based material for teachers and informal educators. Each educational partner also maintained a web site that followed the expedition and provided additional background information to supplement the live feeds. This program continues to grow and has proven very effective at distributing interesting scientific content to a wide range of audiences.

  18. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on DSP is proposed in this paper. The non-uniform response in infrared focal plane array (IRFPA) detectors produces corrupted images with a fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of a DSP-based NUC development platform for IRFPA is described. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP (TMS320DM643) as the kernel processor. The dependability and expansibility of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration-parameter update task is set at a lower priority than video input and output in DSP/BIOS, so that parameter updating does not affect the video streams. The work flow of the system and the strategy of real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
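
    A sketch of the classic neural-network (Scribner-style) SBNUC update that this family of algorithms is built on: each pixel keeps gain and offset state, the "desired" response is the local spatial average of the corrected image, and both parameters are nudged by an LMS step each frame. The learning rate and neighborhood are illustrative; the paper's contribution is the real-time DSP port rather than this update rule.

    ```python
    # Per-pixel gain/offset NUC with an LMS update toward the 4-neighbor
    # spatial average of the corrected frame.
    import numpy as np

    def sbnuc_step(frame, gain, offset, lr=1e-3):
        corrected = gain * frame + offset
        padded = np.pad(corrected, 1, mode="edge")
        desired = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        err = desired - corrected   # LMS error per pixel
        gain += lr * err * frame    # gradient step on gain
        offset += lr * err          # gradient step on offset
        return corrected, gain, offset

    h, w = 240, 320
    gain, offset = np.ones((h, w)), np.zeros((h, w))
    for _ in range(100):                     # a stream of frames would go here
        frame = np.random.rand(h, w) * 100   # placeholder scene data
        out, gain, offset = sbnuc_step(frame, gain, offset)
    ```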

  19. Crew Field Notes: A New Tool for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Evans, Cynthia; Eppler, Dean; Gernhardt, Michael; Bluethmann, William; Graf, Jodi; Bleisath, Scott

    2011-01-01

    The Desert Research and Technology Studies (DRATS) field tests of 2010 focused on the simultaneous operation of two rovers, a historical first. The complexity and data volume of two rovers operating simultaneously presented significant operational challenges for the on-site Mission Control Center, including the real time science support function. The latter was split into two "tactical" back rooms, one for each rover, that supported the real time traverse activities; in addition, a "strategic" science team convened overnight to synthesize the day's findings, and to conduct the strategic forward planning of the next day or days as detailed in [1, 2]. Current DRATS simulations and operations differ dramatically from those of Apollo, including the most evolved Apollo 15-17 missions, due to the advent of digital technologies. Modern digital still and video cameras, combined with the capability for real time transmission of large volumes of data, including multiple video streams, offer the prospect for the ground based science support room(s) in Mission Control to witness all crew activities in unprecedented detail and in real time. It was not uncommon during DRATS 2010 that each tactical science back room simultaneously received some 4-6 video streams from cameras mounted on the rover or the crews' backpacks. Some of the rover cameras are controllable PZT (pan, zoom, tilt) devices that can be operated by the crews (during extensive drives) or remotely by the back room (during EVAs). Typically, a dedicated "expert" and professional geologist in the tactical back room(s) controls, monitors and analyses a single video stream and provides the findings to the team, commonly supported by screen-saved images. It seems obvious that the real-time comprehension and synthesis of the verbal descriptions, extensive imagery, and other information (e.g. navigation data; time lines etc) flowing into the science support room(s) constitute a fundamental challenge to future mission operations: how can one analyze, comprehend, and synthesize, in real time, the enormous data volume coming to the ground? Real-time understanding of all data is needed for constructive interaction with the surface crews, and it becomes critical for the strategic forward planning process.

  20. Accepted into Education City

    ERIC Educational Resources Information Center

    Asquith, Christina

    2006-01-01

    Qatar's Education City, perhaps the world's most diverse campus, is almost entirely unknown in the United States, but represents the next step in the globalization of American higher education--international franchising. Aided by technology such as online libraries, distance learning and streaming video, U.S. universities offer--and charge tuition…
