Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.
Choi, Inchang; Baek, Seung-Hwan; Kim, Min H.
2017-11-01
To extend the dynamic range of video, it is common practice to capture multiple frames sequentially at different exposures and combine them to extend the dynamic range of each video frame. However, this approach suffers from ghosting artifacts caused by fast and complex natural motion. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range, but the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach that jointly solves the two specific problems of interlaced video imaging with different exposures: deinterlacing and denoising. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since interlacing often leaves partial detail information available in the differently exposed rows, we use this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low- and high-exposure rows, and we also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow the concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging over state-of-the-art high-dynamic-range video methods.
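A minimal sketch of the interlaced dual-exposure idea described above, assuming even rows carry the short exposure and odd rows the long one. Here the missing rows of each exposure field are filled by simple neighbor averaging (the paper instead uses learned joint dictionaries via sparse coding), and the two fields are merged into relative radiance; the exposure times are illustrative values.

```python
T_SHORT, T_LONG = 1.0, 4.0  # hypothetical exposure times (arbitrary units)

def deinterlace(frame, keep_even):
    """Fill the missing rows of one exposure field by averaging neighbors."""
    h = len(frame)
    out = []
    for y in range(h):
        if (y % 2 == 0) == keep_even:
            out.append(frame[y][:])          # row was captured directly
        else:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < h - 1 else frame[y - 1]
            out.append([(a + b) / 2 for a, b in zip(above, below)])
    return out

def merge_hdr(short_img, long_img):
    """Average the two exposure-normalized estimates into relative radiance."""
    return [[(s / T_SHORT + l / T_LONG) / 2 for s, l in zip(rs, rl)]
            for rs, rl in zip(short_img, long_img)]
```

For a static, noise-free scene the two exposure-normalized fields agree, so the merge is exact; the learned approach in the paper is aimed at the realistic case where interpolation alone loses detail.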
Evaluation of privacy in high dynamic range video sequences
NASA Astrophysics Data System (ADS)
Řeřábek, Martin; Yuan, Lin; Krasula, Lukáš; Korshunov, Pavel; Fliegel, Karel; Ebrahimi, Touradj
2014-09-01
The ability of high dynamic range (HDR) imaging to capture details in high-contrast environments has a significant impact on privacy in video surveillance. However, the extent to which HDR imaging affects privacy, compared with typical low dynamic range (LDR) imaging, is neither well studied nor well understood. Studying this question requires a suitable dataset of images and video sequences. We have therefore created PEViD-HDR, a publicly available HDR video dataset for privacy evaluation, which is an HDR extension of the existing Privacy Evaluation Video Dataset (PEViD). PEViD-HDR can aid the evaluation of privacy protection tools, as well as demonstrate the importance of HDR imaging in video surveillance applications and its influence on the privacy-intelligibility trade-off. We conducted a preliminary subjective experiment demonstrating the usability of the created dataset for the evaluation of privacy issues in video. The results confirm that a tone-mapped HDR video contains more privacy-sensitive information and details than a typical LDR video.
Backwards compatible high dynamic range video compression
NASA Astrophysics Data System (ADS)
Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.
2014-02-01
This paper presents a two-layer codec architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded at 8 bits per component, which can be decoded using conventional equipment; its content is optimized for rendering on low dynamic range displays. The enhancement layer contains the difference, in a perceptually uniform color space, between the inverse-tone-mapped base layer content and the original video stream. Predicting the high dynamic range content reduces redundancy in the transmitted data while still preserving highlights and out-of-gamut colors, and the perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for the efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data of the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using the AVC and HEVC codecs.
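The two-layer principle above (base layer plus a residual against the inverse-tone-mapped base) can be illustrated with a toy scalar version. The gamma tone map and the peak luminance used here are stand-ins, not the paper's operators; the point is only that the residual makes reconstruction exact.

```python
GAMMA = 2.2    # stand-in tone curve, not the paper's operator
PEAK = 100.0   # assumed HDR peak value, arbitrary units

def tone_map(v):
    """HDR value -> 8-bit base-layer code (simple gamma, then quantize)."""
    return round(255 * (v / PEAK) ** (1 / GAMMA))

def inverse_tone_map(code):
    """Base-layer code -> predicted HDR value."""
    return PEAK * (code / 255) ** GAMMA

def encode(hdr):
    """Return (base layer, enhancement-layer residual) for a list of values."""
    base = [tone_map(v) for v in hdr]
    residual = [v - inverse_tone_map(c) for v, c in zip(hdr, base)]
    return base, residual

def decode(base, residual):
    """Reconstruct HDR: prediction from the base layer plus the residual."""
    return [inverse_tone_map(c) + r for c, r in zip(base, residual)]
```

Because the residual absorbs both quantization loss and prediction error, the reconstruction is exact here; in the real codec both layers are additionally lossy-compressed.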
Introducing a Public Stereoscopic 3D High Dynamic Range (SHDR) Video Database
NASA Astrophysics Data System (ADS)
Banitalebi-Dehkordi, Amin
2017-03-01
High dynamic range (HDR) displays and cameras are making their way into the consumer market at a rapid rate. Thanks to TV and camera manufacturers, HDR systems are now becoming commercially available to end users, only a few years after the blooming of 3D video technologies. MPEG and ITU are also actively working towards the standardization of these technologies. However, preliminary research efforts in these video technologies are hampered by the lack of sufficient experimental data. In this paper, we introduce a stereoscopic 3D HDR database of videos that is made publicly available to the research community. We explain the procedure used to capture, calibrate, and post-process the videos. In addition, we provide insights on potential use cases, challenges, and research opportunities arising from the combination of the higher dynamic range of HDR and the depth impression of 3D.
Chroma sampling and modulation techniques in high dynamic range video coding
NASA Astrophysics Data System (ADS)
Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj
2015-09-01
High Dynamic Range and Wide Color Gamut (HDR/WCG) video coding is an area of intense research interest in the engineering community, with potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), and WCG broadens the chroma representation (BT.2020). The resulting content poses new challenges for coding and transmission. The Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) is currently exploring coding-efficiency and functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video based on splitting the HDR signal into a smoothed luminance (SL) signal and an associated base signal (B). Both signals are then chroma-downsampled to YFbFr 4:2:0 signals using advanced resampling filters, and coded using the Main10 profile of the High Efficiency Video Coding (HEVC) standard, which was developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding and backwards compatibility with the existing HEVC Main10 profile: an existing Main10 decoder can produce a viewable standard dynamic range video suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable average gain of over 25% in PSNR (RGB domain) for a key set of test clips selected by the ISO/MPEG committee.
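The 4:2:0 chroma downsampling step mentioned above keeps one chroma sample per 2×2 luma block. A plain 2×2 box average shows the mechanics (the paper uses more advanced resampling filters with longer taps):

```python
def downsample_420(chroma):
    """4:4:4 -> 4:2:0: average each 2x2 block of a full-resolution chroma
    plane (list of rows) into one sample. Dimensions are assumed even."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1] +
              chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

The box filter is the crudest valid choice; longer filters trade aliasing against ringing, which matters for the HDR signals discussed in the paper.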
Video enhancement workbench: an operational real-time video image processing system
NASA Astrophysics Data System (ADS)
Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.
1993-01-01
Video image sequences can be exploited in real time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance the resolution of directly adjacent objects. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist in the resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
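Two of the operations above, unsharp masking and temporal frame averaging, can be sketched in simplified one-channel form. The fixed gain and 3-tap blur are illustrative; the operational system regulates the gain by local contrast and runs in real-time hardware.

```python
def unsharp(row, gain=1.0):
    """Classic unsharp mask on one scanline: add gain * (signal - 3-tap blur).
    Edges overshoot, which is what sharpens low-contrast detail."""
    n = len(row)
    blurred = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    return [v + gain * (v - b) for v, b in zip(row, blurred)]

def average_frames(frames):
    """Temporal mean of aligned frames; zero-mean noise shrinks as 1/sqrt(N)."""
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n
             for x in range(len(frames[0][0]))]
            for y in range(len(frames[0]))]
```

On a step edge, the unsharp output dips below the dark side and overshoots the bright side, the classic sharpening signature that contrast regulation must keep from becoming a visible halo.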
HEVC for high dynamic range services
NASA Astrophysics Data System (ADS)
Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew
2015-09-01
Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way that gives viewers a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video is smaller than the range SMPTE ST 2084 supports. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code value remapping method that extends the restricted-range code values to the full range of code values so that existing standards such as HEVC can better compress the video content. The paper also identifies the related non-normative, encoder-only changes required by the remapping method for a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
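The SMPTE ST 2084 (PQ) inverse EOTF has a published closed form, and the remapping idea can be illustrated with a simple linear stretch of a restricted code range to the full range. The linear stretch is a stand-in; the paper's actual remapping method may differ.

```python
def pq_oetf(nits):
    """SMPTE ST 2084 inverse EOTF: absolute luminance (0..10000 nits) -> [0,1],
    using the standard's published constants."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def remap(code, lo, hi):
    """Stretch restricted code values [lo, hi] to the full [0, 1] range
    (a simple linear stand-in for the paper's remapping method)."""
    return (code - lo) / (hi - lo)
```

Note that SDR reference white (100 nits) already lands near code 0.51 on the PQ curve, which is why dark-to-mid content occupies a narrow slice of the code range and motivates remapping before encoding.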
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes that include areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for extending the dynamic range captured in real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a dedicated hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
Single-layer HDR video coding with SDR backward compatibility
NASA Astrophysics Data System (ADS)
Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.
2016-09-01
The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase in picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion that generates a backward-compatible SDR video with dynamic side metadata. The resulting SDR video is then compressed, distributed, and decoded using standard-compliant decoders (e.g., HEVC Main 10 compliant). The decoded SDR video can be rendered directly on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec-agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches using the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETFs), respectively.
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements that bring details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection between our enhancement scheme and the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast), and noise, resulting in a good balance between the visibility of details and the non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
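A two-band toy version of the detail-energy-dependent gain described above suggests how the boost can taper off for strong edges to limit halos. The paper uses a carefully chosen multi-band decomposition; two bands, a 3-tap blur, and the particular roll-off law below are illustrative assumptions.

```python
def enhance(row, gain=2.0, knee=4.0):
    """One scanline, two bands: low = 3-tap blur, detail = signal - low.
    Small details are amplified by up to `gain`; large details (strong
    edges) get progressively less extra boost, which controls halos.
    `knee` sets where the roll-off begins."""
    n = len(row)
    low = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
           for i in range(n)]
    out = []
    for v, b in zip(row, low):
        d = v - b
        g = gain / (1.0 + abs(d) / knee)   # non-linear, energy-dependent gain
        out.append(b + (1.0 + g) * d)      # reconstruct with boosted detail
    return out
```

With this law a small ripple is boosted by nearly (1 + gain)×, while a very large edge passes through almost unchanged, so sharpening concentrates on low-contrast detail rather than already-strong transitions.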
World's Most Advanced Planetarium Opens; University Partners Sought
NASA Astrophysics Data System (ADS)
Duncan, Douglas K.
2015-01-01
The 40-year-old Fiske Planetarium at the University of Colorado has been remodeled as the most advanced planetarium ever built. The 20-m diameter dome features a stunning 8,000 × 8,000-pixel video image at up to 60 frames per second, produced by 6 JVC projectors. It also features the first US installation of the Megastar IIa opto-mechanical planetarium, which projects 20 million individual stars and 170 deep-sky objects; you can use binoculars indoors and see individual Milky Way stars. The video projectors have high dynamic range, but not as great as the eye. In order to preserve the remarkable Megastar sky while still using video, each projector shines through a computer-controlled variable-density filter that extends the dynamic range by about 4 magnitudes. It is therefore possible to show a Mauna Kea-quality star field and also beautiful bright videos. Unlike most planetariums, the #1 audience of Fiske is college students - the more than 2,000 who take Introductory Astronomy at Colorado each year. WE ARE SEEKING OTHER UNIVERSITIES WITH FULL-DOME VIDEO PLANETARIUMS to join us in the production of college-level material. We already have a beautiful production studio funded by Hewlett-Packard and an experienced full-time video producer for educational programs. Please seek out Fiske Director Dr. Doug Duncan if interested in possible collaboration.
High dynamic range subjective testing
NASA Astrophysics Data System (ADS)
Allan, Brahim; Nilsson, Mike
2016-09-01
This paper describes a set of subjective tests that the authors carried out to assess end-user perception of video encoded with High Dynamic Range technology when viewed in a typical home environment. Viewers scored individual single clips of content presented in High Definition (HD) and Ultra High Definition (UHD), in Standard Dynamic Range (SDR) and in High Dynamic Range (HDR) using both the Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG) transfer characteristics, and presented in SDR as the backwards-compatible rendering of the HLG representation. The quality of SDR HD was improved by approximately equal amounts by either increasing the dynamic range or increasing the resolution to UHD. A further, smaller increase in the viewers' Mean Opinion Scores was observed when increasing both the dynamic range and the resolution, but this was not quite statistically significant.
Quantitative evaluation of low-cost frame-grabber boards for personal computers.
Kofler, J M; Gray, J E; Fuelberth, J T; Taubel, J P
1995-11-01
Nine moderately priced frame-grabber boards for both Macintosh (Apple Computers, Cupertino, CA) and IBM-compatible computers were evaluated using a Society of Motion Picture and Television Engineers (SMPTE) pattern and a video signal generator for dynamic range, gray-scale reproducibility, and spatial integrity of the captured image. The degradation of the video information ranged from minor to severe. Some boards are of reasonable quality for applications in diagnostic imaging and education. However, price and quality are not necessarily directly related.
The virtual brain: 30 years of video-game play and cognitive abilities.
Latham, Andrew J; Patston, Lucy L M; Tippett, Lynette J
2013-09-13
Forty years have passed since video games were first made widely available to the public, and playing games has subsequently become a favorite pastime for many. Players continuously engage with dynamic visual displays, with success contingent on the time-pressured deployment, and flexible allocation, of attention as well as precise bimanual movements. Evidence to date suggests that both brief and extensive exposure to video-game play can result in a broad range of enhancements to various cognitive faculties that generalize beyond the original context. Despite this promise, video-game research is host to a number of methodological issues that require addressing before progress can be made in this area. Here an effort is made to consolidate the past 30 years of literature examining the effects of video-game play on cognitive faculties and, more recently, neural systems. Future work is required to identify the mechanism that allows the act of video-game play to generate such a broad range of generalized enhancements.
Examining the effect of task on viewing behavior in videos using saliency maps
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith A.; Heynderickx, Ingrid
2012-03-01
Research has shown that when viewing still images, people will look at these images in a different manner if instructed to evaluate their quality. They will tend to focus less on the main features of the image and, instead, scan the entire image area looking for clues for its level of quality. It is questionable, however, whether this finding can be extended to videos considering their dynamic nature. One can argue that when watching a video the viewer will always focus on the dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was conducted where half of the participants viewed videos with the task of quality evaluation while the other half were simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The videos contained content which was degraded with compression artifacts over a wide range of quality. An eye tracking device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was possible to observe a systematic difference in the viewing behavior which seemed to correlate to the quality of the videos.
Video game-based exercises for balance rehabilitation: a single-subject design.
Betker, Aimee L; Szturm, Tony; Moussavi, Zahra K; Nett, Cristabel
2006-08-01
To investigate whether coupling foot center of pressure (COP)-controlled video games to standing balance exercises improves dynamic balance control, and to determine whether the motivational and challenging aspects of the video games increase a subject's desire to perform the exercises and complete the rehabilitation process. Case study, pre- and post-exercise. University hospital outpatient clinic. One young adult with an excised cerebellar tumor, one middle-aged adult with a single right cerebrovascular accident, and one middle-aged adult with traumatic brain injury. A COP-controlled, video game-based exercise system. The following were calculated during 12 different tasks: the number of falls, the range of COP excursion, and the COP path length. Post-exercise, subjects exhibited a lower fall count, decreased COP excursion limits for some tasks, increased practice volume, and an increased attention span during training. The COP-controlled, video game-based exercise regimen motivated subjects to increase their practice volume and attention span during training, which in turn improved their dynamic balance control.
NASA Astrophysics Data System (ADS)
Francisco Salgado, Jose
2010-01-01
Astronomer and visual artist Jose Francisco Salgado has directed two astronomical video suites to accompany live performances of classical music works. The suites feature awe-inspiring images, historical illustrations, and visualizations produced by NASA, ESA, and the Adler Planetarium. By the end of 2009, his video suites Gustav Holst's The Planets and Astronomical Pictures at an Exhibition will have been presented more than 40 times in over 10 countries. Lately Salgado, an avid photographer, has been experimenting with high dynamic range imaging, time-lapse, infrared, and fisheye photography, as well as with stereoscopic photography and video to enhance his multimedia works.
Water surface modeling from a single viewpoint video.
Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip
2013-07-01
We introduce a video-based approach to producing water surface models. Recent advances in this field produce high-quality results but require dedicated capture devices and work only in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single-viewpoint video as input. Our approach is based on two observations: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow-water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
NASA Astrophysics Data System (ADS)
Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald
2014-03-01
High-quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up, and set design to enable the evaluation of image and material appearance. To address challenges for HDR displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high-contrast skin tones, specular highlights, and bright, saturated colors. HDR capture is carried out using two cameras mounted on a mirror rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm-size sensors are used. We provide HDR video sequences to serve as common ground for the evaluation of temporal tone mapping operators and HDR displays; they are available to the scientific community for further research.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with a logarithmic response can acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to both the display of images and image analysis techniques. Regarding display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
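The appeal of a logarithmic response, that a fixed illumination ratio maps to a fixed code difference regardless of absolute level, can be checked with an assumed sensor model. The constants a and b below are hypothetical, not the evaluated sensor's parameters.

```python
import math

def log_response(lum, a=20.0, b=50.0):
    """Assumed logarithmic sensor model: code = a * ln(luminance) + b."""
    return a * math.log(lum) + b

def invert_response(code, a=20.0, b=50.0):
    """Recover relative luminance from a logarithmic code value."""
    return math.exp((code - b) / a)
```

Because a doubling of luminance always adds the same a·ln 2 to the code, scene contrast is preserved under global illumination changes, which is the illumination invariance the abstract refers to.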
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure the brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia; the purpose of these measurements is to use the brightness values to estimate the relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors, and the present method is a refined version of the calibration method developed to solve that Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also the analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source whose brightness varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed using custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
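The end-to-end calibration described above amounts to building a measured response table from the artificial variable star and then inverting it for new measurements. A hedged sketch with made-up calibration pairs and piecewise-linear inversion (the actual software's fitting procedure is not specified in the abstract):

```python
def build_response(brightness, signal):
    """Calibration table from the artificial variable star: known input
    brightness vs. integrated frame signal, sorted by signal. Both are
    assumed monotonically increasing over the calibrated range."""
    return sorted(zip(signal, brightness))

def estimate_brightness(table, measured):
    """Invert the system response curve by piecewise-linear interpolation."""
    for (s0, b0), (s1, b1) in zip(table, table[1:]):
        if s0 <= measured <= s1:
            t = (measured - s0) / (s1 - s0)
            return b0 + t * (b1 - b0)
    raise ValueError("measurement outside calibrated range")
```

Because the table is built from frames processed through the entire chain (optics, sensor, tape, digitizer), the inversion automatically compensates for whatever nonlinearity that chain introduces.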
Advanced Video Guidance Sensor (AVGS) Development Testing
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.
2004-01-01
NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing of various sorts at every phase of development. Some of the test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters, both in vacuum and ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.
Bezanilla, F
1985-03-01
A modified digital audio processor, a video cassette recorder, and some simple added circuitry are assembled into a recording device of high capacity. The unit converts two analog channels into digital form at 44-kHz sampling rate and stores the information in digital form in a common video cassette. Bandwidth of each channel is from direct current to approximately 20 kHz and the dynamic range is close to 90 dB. The total storage capacity in a 3-h video cassette is 2 Gbytes. The information can be retrieved in analog or digital form.
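The quoted capacity is easy to sanity-check: two channels at a 44-kHz sampling rate with 16-bit samples (the digital-audio-processor word size implied by the ~90 dB dynamic range, since 20·log10(2^16) ≈ 96 dB) over a 3-hour cassette gives:

```python
# Capacity check for the video-cassette data recorder described above.
channels = 2
sample_rate = 44_000        # samples per second per channel
bytes_per_sample = 2        # 16-bit samples (assumed from the ~90 dB range)
seconds = 3 * 3600          # 3-hour video cassette

total_bytes = channels * sample_rate * bytes_per_sample * seconds
# total_bytes = 1_900_800_000, consistent with the quoted ~2 Gbytes
```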
NASA Astrophysics Data System (ADS)
Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos
2015-02-01
The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to TV audiences with various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development and test are described of a charge injection device (CID) camera using a 244x248 element array. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning and AGC.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full-public adoption of HDR content is, however, hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as a large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also accommodate HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
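The core fusion step, weighted averaging of aligned radiance maps, can be sketched as follows. This is a generic illustration assuming frames are already aligned and normalised to [0,1]; the triangle weighting function is a common choice, not necessarily the one used in the paper:

```python
import numpy as np

def fuse_radiance(frames, exposures):
    """Fuse aligned LDR frames (values in [0,1]) into one radiance map.

    frames: list of aligned images; exposures: corresponding exposure times.
    Each frame contributes its per-pixel radiance estimate (pixel / exposure),
    weighted to trust well-exposed mid-tones over clipped or noisy extremes.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # triangle weight, peak at 0.5
        num += w * (img / t)                 # radiance estimate for this frame
        den += w
    return num / np.maximum(den, 1e-8)
```

With a linear camera response, a scene point of radiance 0.3 seen at exposures 1x and 2x yields pixel values 0.3 and 0.6, and the fusion recovers 0.3 regardless of the weights.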
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the fundamental techniques used to understand the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method to simulate the physical motions of atoms and molecules under certain conditions. It has become a strategic technique and now plays a key role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and therefore their execution has been a big problem. Despite the huge computational cost, molecular dynamics has traditionally been implemented on computers built around a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed with the goal of improving video games, by rapidly creating and displaying images in a frame buffer such as a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology to perform a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations including MD simulations. Herein, we describe the new methodologies developed initially for video games and how they are now applied in MD simulations. PMID:27525251
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the Intra and Inter-prediction that is performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested over two different scenarios: the compression of tone mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show an average bit-rate reduction under the same PSNR, for all the sequences and TMOs considered, of 20.3% and 27.3% for tone mapped content and 2.4% and 2.7% for HDR content.
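The idea of selecting a quantizer by a rate-distortion criterion can be illustrated in miniature, outside any real encoder. This toy sketch (not the paper's HEVC integration) quantizes floating-point tone-mapped values with candidate step sizes and keeps the one minimizing D + λ·R, approximating rate R by the entropy of the quantized symbols:

```python
import numpy as np

def entropy(symbols):
    """Empirical entropy (bits/symbol) of an integer symbol array."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def quantize(x, step):
    """Uniform quantization of float values to integer indices."""
    return np.round(x / step).astype(np.int64)

def best_quantizer(x, steps, lam=0.1):
    """Pick the step size minimizing the rate-distortion cost D + lam * R."""
    best = None
    for step in steps:
        q = quantize(x, step)
        d = float(np.mean((x - q * step) ** 2))   # MSE distortion
        cost = d + lam * entropy(q)
        if best is None or cost < best[0]:
            best = (cost, step)
    return best[1]
```

A large λ favours the coarse quantizer (fewer symbols, lower rate); a small λ favours the fine one (lower distortion), mirroring how the encoder mode decision trades bit-rate against PSNR.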
NASA Astrophysics Data System (ADS)
O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram
2015-07-01
Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
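The ZNCC metric named above is a standard similarity measure; because both patches are mean-subtracted and normalised, it is invariant to uniform brightness offsets and contrast scaling, which is what makes it robust to the "luminous complexities" mentioned. A minimal implementation (real trackers evaluate it over a search window of candidate offsets):

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalised cross-correlation between two equal-size patches.

    Returns a value in [-1, 1]: 1 for identical structure (up to brightness
    offset and contrast scale), -1 for inverted structure.
    """
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:            # at least one patch is constant
        return 0.0
    return float((a * b).sum() / denom)
```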
Characterizing popularity dynamics of online videos
NASA Astrophysics Data System (ADS)
Ren, Zhuo-Ming; Shi, Yu-Qiang; Liao, Hao
2016-07-01
Online popularity has a major impact on videos, music, news and other content in online systems. Characterizing online popularity dynamics is a natural way to explain the observed properties in terms of the already acquired popularity of each individual item. In this paper, we provide a quantitative, large-scale, temporal analysis of the popularity dynamics in two online video-providing websites, namely MovieLens and Netflix. The two collected data sets contain over 100 million records and span a decade. We show that the popularity dynamics of online videos evolve over time, and find that they can be characterized by burst behaviors, typically occurring in the early life span of a video, later settling into the classic preferential popularity increase mechanism.
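The "classic preferential popularity increase mechanism" is the rich-get-richer process: each new view goes to video i with probability proportional to its current popularity. A toy simulation of that mechanism (purely illustrative, not the paper's model):

```python
import random

def simulate_views(n_videos, n_views, seed=0):
    """Allocate n_views sequentially, each proportional to current popularity."""
    rng = random.Random(seed)
    views = [1] * n_videos            # seed each video with one view
    for _ in range(n_views):
        total = sum(views)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, v in enumerate(views):  # roulette-wheel selection
            acc += v
            if r <= acc:
                views[i] += 1
                break
    return views
```

Run long enough, this process produces the heavy-tailed popularity distributions typical of online video catalogues, with a few videos absorbing most views.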
Beam/seam alignment control for electron beam welding
Burkhardt, Jr., James H.; Henry, J. James; Davenport, Clyde M.
1980-01-01
This invention relates to a dynamic beam/seam alignment control system for electron beam welds utilizing video apparatus. The system includes automatic control of workpiece illumination, near infrared illumination of the workpiece to limit the range of illumination and camera sensitivity adjustment, curve fitting of seam position data to obtain an accurate measure of beam/seam alignment, and automatic beam detection and calculation of the threshold beam level from the peak beam level of the preceding video line to locate the beam or seam edges.
Di Sante, Gabriele; Casimiro, Mathew C.; Pestell, Timothy G.; Pestell, Richard G.
2016-01-01
Time-lapse video microscopy can be defined as the real time imaging of living cells. This technique relies on the collection of images at different time points. Time intervals can be set through a computer interface that controls the microscope-integrated camera. This kind of microscopy requires both the ability to acquire very rapid events and the signal generated by the observed cellular structure during these events. After the images have been collected, a movie of the entire experiment is assembled to show the dynamics of the molecular events of interest. Time-lapse video microscopy has a broad range of applications in the biomedical research field and is a powerful and unique tool for following the dynamics of cellular events in real time. Through this technique, we can assess cellular events such as migration, division, signal transduction, growth, and death. Moreover, using fluorescent molecular probes we are able to mark specific molecules, such as DNA, RNA or proteins, and follow them through their molecular pathways and functions. Time-lapse video microscopy has multiple advantages, the major one being the ability to collect data at the single-cell level, which make it a unique technology for investigation in the field of cell biology. However, time-lapse video microscopy has limitations that can interfere with the acquisition of high quality images. Images can be compromised by both external factors (temperature fluctuations, vibrations, humidity) and internal factors (pH, cell motility). Herein, we describe a protocol for the dynamic acquisition of a specific protein, Parkin, fused with the enhanced yellow fluorescent protein (EYFP) in order to track the selective removal of damaged mitochondria, using a time-lapse video microscopy approach. PMID:27168174
NASA Astrophysics Data System (ADS)
Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim
2017-07-01
A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously with exact synchronization. The video sequences were registered off-line to compensate for eye movements. From the registered video sequences, dynamic parameters such as cardiac-cycle-induced reflection changes and eye movements can be calculated and compared between the two eyes.
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
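The failure mode described above stems from the linear color restoration step: scaling all three channels by the same luminance-gain factor. This is not the authors' wavelet algorithm, but a minimal log-domain sketch of the same structure (compress luminance, then restore color linearly), which shows where a purely linear restoration enters the pipeline:

```python
import numpy as np

def compress(rgb, eps=1e-6):
    """Dynamic range compression with linear color restoration (sketch).

    rgb: HxWx3 array in [0, 1]. Luminance is compressed in the log domain,
    then each pixel's color is restored by scaling all channels with the
    same gain, i.e. the linear restoration the abstract identifies as the
    weak point for strongly single-band scenes.
    """
    lum = rgb.mean(axis=-1, keepdims=True)               # crude luminance
    new_lum = np.log1p(lum * 100.0) / np.log1p(100.0)    # compress range
    return rgb * (new_lum / (lum + eps))                 # linear restoration
```

Dark regions receive a much larger gain than bright ones, which is the intended dynamic range compression; the per-pixel scalar gain is what preserves hue only approximately.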
Implication of high dynamic range and wide color gamut content distribution
NASA Astrophysics Data System (ADS)
Lu, Taoran; Pu, Fangjun; Yin, Peng; Chen, Tao; Husak, Walt
2015-09-01
High Dynamic Range (HDR) and Wider Color Gamut (WCG) content represents a greater range of luminance levels and a more complete reproduction of colors found in real-world scenes. The current video distribution environments deliver a Standard Dynamic Range (SDR) signal. Therefore, there might be some significant implications for today's end-to-end ecosystem from content creation to distribution and finally to consumption. For SDR content, the common practice is to apply compression on Y'CbCr 4:2:0 using a gamma transfer function and non-constant luminance 4:2:0 chroma subsampling. For HDR and WCG content, it is desirable to examine if such a signal format still works well for compression, and it is interesting to know if the overall system performance can be further improved by exploring different signal formats and processing workflows. In this paper, we will provide some of our insight into those problems.
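The non-constant-luminance 4:2:0 workflow mentioned above computes Cb/Cr from gamma-encoded (non-linear) RGB and then downsamples chroma 2x in each dimension. A sketch of those two steps, assuming BT.709 matrix coefficients and simple 2x2 block averaging (codecs use various downsampling filters):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Non-constant-luminance Y'CbCr from gamma-encoded RGB in [0,1] (BT.709)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma from non-linear RGB
    cb = (b - y) / 1.8556                       # scaled color differences
    cr = (r - y) / 1.5748
    return y, cb, cr

def subsample_420(c):
    """4:2:0 chroma subsampling: average each 2x2 block (one filter choice)."""
    h, w = c.shape
    return c[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Because luma is computed from gamma-encoded values, it is not true luminance; for HDR/WCG signals this is one of the format properties the paper re-examines.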
Peters, R M H; Zweekhorst, M B M; van Brakel, W H; Bunders, J F G; Irwanto
2016-01-01
The Stigma Assessment and Reduction of Impact project aims to assess the effectiveness of stigma-reduction interventions in the field of leprosy. Participatory video seemed to be a promising approach to reducing stigma among stigmatized individuals (in this study the video makers) and the stigmatisers (video audience). This study focuses on the video makers and seeks to assess the impact on them of making a participatory video and to increase understanding of how to deal with foreseeable difficulties. Participants were selected on the basis of criteria and in collaboration with the community health centre. This study draws on six qualitative methods including interviews with the video makers and participant observation. Triangulation was used to increase the validity of the findings. Two videos were produced. The impact on participants ranged from having a good time to a greater sense of togetherness, increased self-esteem, individual agency and willingness to take action in the community. Concealment of leprosy is a persistent challenge, and physical limitations and group dynamics are also areas that require attention. Provided these three areas are properly taken into account, participatory video has the potential to address stigma at least at three levels - intrapersonal, interpersonal and community - and possibly more.
Hill, Hamish R M; Crowe, Trevor P; Gonsalvez, Craig J
2016-01-01
To pilot an intervention involving reflective dialogue based on video recordings of clinical supervision. Fourteen participants (seven psychotherapists and their supervisors) completed a reflective practice protocol after viewing a video of their most recent supervision session, then shared their reflections in a second session. Thematic analysis of individual reflections and feedback resulted in the following dominant themes: (1) Increased discussion of supervisee anxiety and the tensions between autonomy and dependence; (2) intentions to alter supervisory roles and practice; (3) identification of and reflection on parallel process (defined as the dynamic transmission of relationship patterns between therapy and supervision); and (4) a range of perceived impacts including improvements in supervisory alliance. The results suggest that reflective dialogue based on supervision videos can play a useful role in psychotherapy supervision, including with relatively inexperienced supervisees. Suggestions are provided for the encouragement of ongoing reflective dialogue in routine supervision practice.
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
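Once exposure-robust descriptors have been matched between the two cameras' radiant power images, the alignment itself reduces to estimating a homography from the matched point pairs. A minimal direct linear transform (DLT) fit in NumPy, shown as a generic sketch (the paper's pipeline also involves descriptor extraction and robust outlier handling):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst (DLT, N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def warp_point(h, p):
    """Apply homography h to a 2D point (homogeneous divide)."""
    q = h @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

In practice the fit is wrapped in RANSAC to reject mismatched descriptors before the final least-squares estimate.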
Brezinski, Mark E
2017-01-01
Optical coherence tomography (OCT) elastography (OCTE) has the potential to be an important diagnostic tool for pathologies including coronary artery disease, osteoarthritis, malignancies, and even dental caries. Many groups have performed OCTE, including our own, using a wide range of approaches. However, we will demonstrate that current OCTE approaches are not scalable to real-time, in vivo imaging. As will be discussed, among the most important reasons is that current designs focus on the system and not the target. Specifically, tissue dynamic responses are not accounted for, with examples being the tissue strain response time, preload variability, and conditioning variability. Tissue dynamic responses, and to a lesser degree static tissue properties, prevent accurate video-rate modulus assessments for current embodiments. Accounting for them is the focus of this paper. A top-down approach will be presented to overcome these challenges to real-time in vivo tissue characterization. Discussed first is an example clinical scenario where OCTE would be of substantial relevance, the prevention of acute myocardial infarction or heart attacks. Then the principles behind OCTE are examined. Next, constraints on the in vivo application of current OCTE are evaluated, focusing on dynamic tissue responses. An example is the tissue strain response, where it takes about 20 msec after a stress is applied to reach plateau. This response delay is not an issue at the slow acquisition rates at which most current OCTE approaches are performed, but it is for video-rate OCTE. Since at video rate each frame is only 30 msec, for essentially all current approaches this means the strain for a given stress is changing constantly during the B-scan. Therefore the modulus can't be accurately assessed. This serious issue is an even greater problem for pulsed techniques, as it means the strain/modulus for a given stress (at a location) is unpredictably changing over a B-scan. The paper concludes by introducing a novel video-rate approach to overcome these challenges. PMID:29286052
McCoy, S.W.; Kean, J.W.; Coe, J.A.; Staley, D.M.; Wasklewicz, T.A.; Tucker, G.E.
2010-01-01
Many theoretical and laboratory studies have been undertaken to understand debris-flow processes and their associated hazards. However, complete and quantitative data sets from natural debris flows needed for confirmation of these results are limited. We used a novel combination of in situ measurements of debris-flow dynamics, video imagery, and pre- and postflow 2-cm-resolution digital terrain models to study a natural debris-flow event. Our field data constrain the initial and final reach morphology and key flow dynamics. The observed event consisted of multiple surges, each with clear variation of flow properties along the length of the surge. Steep, highly resistant, surge fronts of coarse-grained material without measurable pore-fluid pressure were pushed along by relatively fine-grained and water-rich tails that had a wide range of pore-fluid pressures (some two times greater than hydrostatic). Surges with larger nonequilibrium pore-fluid pressures had longer travel distances. A wide range of travel distances from different surges of similar size indicates that dynamic flow properties are of equal or greater importance than channel properties in determining where a particular surge will stop. Progressive vertical accretion of multiple surges generated the total thickness of mapped debris-flow deposits; nevertheless, deposits had massive, vertically unstratified sedimentological textures. © 2010 Geological Society of America.
Evaluation of color encodings for high dynamic range pixels
NASA Astrophysics Data System (ADS)
Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania
2015-03-01
Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating-point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of the luminance and color distribution, which is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding color HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color difference encoding in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. More significant differences can be observed between color difference encoding schemes, among which YDuDv encoding seems to be the most efficient.
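For context, the PQ encoding evaluated in this abstract is the SMPTE ST 2084 perceptual quantizer, whose forward (inverse-EOTF) curve is published; a minimal sketch using the standard constants:

```python
def pq_encode(y):
    """SMPTE ST 2084 PQ inverse EOTF: map normalized luminance y in [0, 1]
    (absolute luminance / 10000 cd/m^2) to a normalized code value in [0, 1]."""
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2
```

Quantizing `pq_encode(y)` to n bits yields the integer code value; the bit-depth experiments in the paper ask how small n can be before quantization steps become visible.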
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj
2015-09-01
This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. More specifically, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, the side-by-side presentation methodology was used for the subjective experiment to discriminate small differences between the Anchor and proponents. Subjective results show that the proposals provide evidence that the coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.
Fast exposure time decision in multi-exposure HDR imaging
NASA Astrophysics Data System (ADS)
Piao, Yongjie; Jin, Guang
2012-10-01
Currently available imaging and display systems suffer from insufficient dynamic range and cannot capture all the information in a high dynamic range (HDR) scene. The number of low dynamic range (LDR) image samples and the speed of the exposure time decision dramatically impact the real-time performance of the system. In order to realize a real-time HDR video acquisition system, this paper proposes a fast and robust method for exposure time selection in under- and over-exposed areas, based on the system response function. The method utilizes the monotonicity of the imaging system: the exposure time is first adjusted to an initial value that makes the median value of the image equal to the middle value of the system output range; then the exposure time is adjusted so that the pixel values on the two sides of the histogram reach the middle value of the system output range. Thus three low dynamic range images are acquired. Experiments show that the proposed method for adjusting the initial exposure time converges in two iterations, which is faster and more stable than the average gray control method. As to the exposure time adjustment in under- and over-exposed areas, the proposed method uses the dynamic range of the system more efficiently than a fixed exposure time method.
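The initial-exposure step described in this abstract can be sketched as follows; the linear, clipped camera response here is a hypothetical stand-in for the calibrated system response function the paper relies on:

```python
import numpy as np

def capture(scene, t, full_scale=255.0):
    # hypothetical monotone camera response: linear in exposure time, then clipped
    return np.clip(scene * t, 0.0, full_scale)

def initial_exposure(scene, t0, target=127.5, iters=2):
    """Drive the median pixel value toward the middle of the output range,
    exploiting the monotonicity of the response, as the paper describes."""
    t = t0
    for _ in range(iters):
        med = np.median(capture(scene, t))
        if med > 0:
            t *= target / med
    return t
```

With a purely linear response this converges in one step; the paper reports two iterations for its calibrated response function.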
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC can support spatial, temporal and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. But the current SVC is insufficient to support dynamic video conversion with scalability, so the adaptation of bitrate to meet a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop corresponding bitstream extraction, encoding and decoding schemes. At the encoder, we insert the IDR NAL periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction. Real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary for time-varying network conditions.
Warfighter Visualizations Compilations
2013-05-01
list of the user’s favorite websites or other textual content, sub-categorized into types, such as blogs, social networking sites, comics , videos...available: The example in the prototype shows a random archived comic from the website. Other options include thumbnail strips of imagery or dynamic...varied, and range from serving as statistical benchmarks, for increasing social consciousness and interaction, for improving educational interactions
Rehm, K; Seeley, G W; Dallas, W J; Ovitt, T W; Seeger, J F
1990-01-01
One of the goals of our research in the field of digital radiography has been to develop contrast-enhancement algorithms for eventual use in the display of chest images on video devices with the aim of preserving the diagnostic information presently available with film, some of which would normally be lost because of the smaller dynamic range of video monitors. The ASAHE algorithm discussed in this article has been tested by investigating observer performance in a difficult detection task involving phantoms and simulated lung nodules, using film as the output medium. The results of the experiment showed that the algorithm is successful in providing contrast-enhanced, natural-looking chest images while maintaining diagnostic information. The algorithm did not effect an increase in nodule detectability, but this was not unexpected because film is a medium capable of displaying a wide range of gray levels. It is sufficient at this stage to show that there is no degradation in observer performance. Future tests will evaluate the performance of the ASAHE algorithm in preparing chest images for video display.
Dynamic Textures Modeling via Joint Video Dictionary Learning.
Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng
2017-04-06
Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. First, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such a transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
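The transition-matrix step can be sketched with plain least squares; this hypothetical minimal version omits the stability constraints on the transition matrix and the dictionary learning itself, both central to JVDL:

```python
import numpy as np

def learn_transition(codes):
    """Given sparse coefficients codes[:, t] of each frame over a learned
    dictionary, fit A so that codes[:, t+1] ~ A @ codes[:, t]."""
    x_prev, x_next = codes[:, :-1], codes[:, 1:]
    # solve min_A || x_next - A x_prev ||_F via least squares
    a_t, *_ = np.linalg.lstsq(x_prev.T, x_next.T, rcond=None)
    return a_t.T
```

A dynamic texture is then summarized by the pair (dictionary, A), which can be iterated forward for synthesis or compared across sequences for recognition.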
Full-motion video analysis for improved gender classification
NASA Astrophysics Data System (ADS)
Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.
2014-06-01
The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide a higher-resolution temporal and spatial dataset for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
Measuring Engagement as Students Learn Dynamic Systems and Control with a Video Game
ERIC Educational Resources Information Center
Coller, B. D.; Shernoff, David J.; Strati, Anna
2011-01-01
The paper presents results of a multi-year quasi-experimental study of student engagement during which a video game was introduced into an undergraduate dynamic systems and control course. The video game, "EduTorcs", provided challenges in which students devised control algorithms that drive virtual cars and ride virtual bikes through a…
Designing After-School Learning Using the Massively Multiplayer Online Role-Playing Game
ERIC Educational Resources Information Center
King, Elizabeth M.
2015-01-01
Digital games have become popular for engaging students in a range of learning goals, both in the classroom and the after-school space. In this article, I discuss a specific genre of video game, the massively multiplayer online role-playing game (MMO), which has been identified as a dynamic environment for encountering 21st-century workplace…
Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.
Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao
2016-06-01
Video encryption schemes mostly employ selective encryption to encrypt the important and sensitive parts of video information, aiming to ensure real-time performance and encryption efficiency. Classic block ciphers are not applicable to video encryption due to their high computational overhead. In this paper, we propose an encryption selection control module, controlled by a chaotic pseudorandom sequence, that encrypts video syntax elements dynamically. A novel spatiotemporal chaos system and a binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
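The keystream idea can be illustrated with a logistic map standing in for the paper's novel spatiotemporal chaos system; threshold binarization turns chaotic iterates into key bits, and the selected syntax-element bytes are XORed with them:

```python
def keystream(x0, nbytes, r=3.99):
    """Chaotic keystream: iterate the logistic map (a simple stand-in for the
    paper's spatiotemporal system) and binarize each iterate by thresholding."""
    x, out = x0, bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            x = r * x * (1.0 - x)
            byte = (byte << 1) | (x > 0.5)  # binarization at 0.5
        out.append(byte)
    return bytes(out)

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))
```

Decryption applies the same XOR with the same seed; encrypting only selected syntax elements keeps the H.264/AVC bitstream format-compliant.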
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all information is present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic real-time full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and successfully applies to night-time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces.
The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
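A minimal sketch of the fusion step, assuming a Mertens-style well-exposedness weighting (the abstract does not specify the paper's actual fusion mechanism, so this is illustrative only):

```python
import numpy as np

def fuse(stack, sigma=0.2):
    """Blend differently exposed, registered frames: pixels near mid-gray get
    high weight, so dark areas draw from long exposures and bright areas from
    short ones. stack: (N, H, W) array with values in [0, 1]."""
    stack = np.asarray(stack, dtype=np.float64)
    w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2) + 1e-12
    return (w * stack).sum(axis=0) / w.sum(axis=0)
```

The smart acquisition loop described above would choose the exposures feeding this stack so that every scene region is well exposed in at least one frame.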
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng
2016-01-01
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
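The heat-map stage can be sketched by accumulating thresholded frame differences; the saliency-based background model that follows it in the paper is omitted in this minimal, illustrative version:

```python
import numpy as np

def motion_heat_map(frames, thresh=10.0):
    """Accumulate per-pixel motion evidence over a video clip: each frame pair
    contributes wherever the absolute difference exceeds the threshold.
    frames: (T, H, W) array of grayscale frames."""
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0)) > thresh
    return diffs.mean(axis=0)  # fraction of frame pairs showing motion per pixel
```

Hot regions of the returned map would then be the candidate areas in which the paper's background model searches for few-pixel vehicles.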
ERIC Educational Resources Information Center
Cain, William Christopher
2017-01-01
The following study was framed around a simple question: when a group of people is engaged in video conferencing, "what sort of things can they do to improve their group dynamics?" This is an important question for current and future educational practice because web-based video conferencing has increasingly become an important tool for…
STS-74/Mir photogrammetric appendage structural dynamics experiment
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Gilbert, Michael G.
1996-01-01
The Photogrammetric Appendage Structural Dynamics Experiment (PASDE) is an International Space Station (ISS) Phase-1 risk mitigation experiment. Phase-1 experiments are performed during docking missions of the U.S. Space Shuttle to the Russian Space Station Mir. The purpose of the experiment is to demonstrate the use of photogrammetric techniques for determination of structural dynamic mode parameters of solar arrays and other spacecraft appendages. Photogrammetric techniques are a low cost alternative to appendage mounted accelerometers for the ISS program. The objective of the first flight of PASDE, on STS-74 in November 1995, was to obtain video images of Mir Kvant-2 solar array response to various structural dynamic excitation events. More than 113 minutes of high quality structural response video data was collected during the mission. The PASDE experiment hardware consisted of three instruments each containing two video cameras, two video tape recorders, a modified video signal time inserter, and associated avionics boxes. The instruments were designed, fabricated, and tested at the NASA Langley Research Center in eight months. The flight hardware was integrated into standard Hitchhiker canisters at the NASA Goddard Space Flight Center and then installed into the Space Shuttle cargo bay in locations selected to achieve good video coverage and photogrammetric geometry.
Synchronous digitization for high dynamic range lock-in amplification in beam-scanning microscopy
Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2014-01-01
Digital lock-in amplification (LIA) with synchronous digitization (SD) is shown to provide significant signal to noise (S/N) and linear dynamic range advantages in beam-scanning microscopy measurements using pulsed laser sources. Direct comparisons between SD-LIA and conventional LIA in homodyne second harmonic generation measurements resulted in S/N enhancements consistent with theoretical models. SD-LIA provided notably larger S/N enhancements in the limit of low light intensities, through the smooth transition between photon counting and signal averaging developed in previous work. Rapid beam scanning instrumentation with up to video rate acquisition speeds minimized photo-induced sample damage. The corresponding increased allowance for higher laser power without sample damage is advantageous for increasing the observed signal content. PMID:24689588
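The core lock-in computation, whether done in analog hardware or digitally on synchronously sampled data as in SD-LIA, can be sketched as quadrature demodulation:

```python
import numpy as np

def lock_in_amplitude(signal, f_ref, fs):
    """Digital lock-in: mix the signal with in-phase and quadrature references
    at f_ref and average; the averaging acts as the low-pass filter."""
    t = np.arange(len(signal)) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return np.hypot(i, q)
```

Averaging over an integer number of reference cycles rejects components at other frequencies, which is what yields the S/N advantage over broadband detection.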
NASA Astrophysics Data System (ADS)
Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay
2018-03-01
Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.
ERIC Educational Resources Information Center
Maggio, Severine; Lete, Bernard; Chenu, Florence; Jisa, Harriet; Fayol, Michel
2012-01-01
This study examines the dynamics of cognitive processes during writing. Participants were 5th, 7th and 9th graders ranging in age from 10 to 15 years. They were shown a short silent video composed of clips illustrating conflictual situations between people in school, and were invited to produce a narrative text. Three chronometric measures of word…
NASA Technical Reports Server (NTRS)
Ritman, E. L.; Wood, E. H.
1973-01-01
The current status and applications of the biplane video roentgen densitometry, videometry, and video digitization systems are described. These techniques were developed, and continue to be developed, for studies of the effects of gravitational and inertial forces on cardiovascular and respiratory dynamics in intact animals and man. Progress is reported in the field of lung dynamics and three-dimensional reconstruction of the dynamic thoracic contents from roentgen video images. It is anticipated that these data will provide added insight into the role of shape and internal spatial relationships (which are altered particularly by acceleration and position of the body) of these organs as an indication of their functional status.
Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji
2015-08-01
To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients' tumor motions were evaluated as waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Thirteen of the sixteen trajectories (81.3%) were successfully reproduced with the Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar; thus, 13 of the 16 trajectories were analyzed. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
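One plausible reading of the error statistic quoted above, as a sketch (the paper's exact definition may differ):

```python
import numpy as np

def positional_error(target_centres, field_centres):
    """Absolute mean difference + 2 standard deviations of the per-frame
    offsets between exposed-target and exposed-field centres (in mm)."""
    d = np.abs(np.asarray(target_centres) - np.asarray(field_centres))
    return d.mean() + 2.0 * d.std()
```

Applied to the ~1156 frame-by-frame centre measurements per trajectory, this yields one summary error value per trajectory, comparable to the ExacTrac log-derived errors.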
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; thus it would be interesting to measure the dynamic 3D-deformation of the whole pelvic bone in order to get a more realistic dataset for a better implant design. We therefore hypothesized that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While applying a dynamic sinusoidal load, the 3D-movement of the markers was recorded by the cameras and afterwards the 3D-deformation of the pelvis specimen was computed. The accuracy of the 3D-movement of the markers was verified against a 3D-displacement curve with a step function generated using a manually driven 3D micro-motion-stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ± 0.036 mm, and ± 0.022 mm if tracked by 6 cameras. The detectable 3D-movement performed by the 3D-micro-motion-stage was smaller than the noise level of the 3D video motion capturing system. Therefore the limiting factor of the setup was the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in dynamic testing of a wide range of materials, like anatomical specimens, biomaterials, and their combinations.
The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in a reliable biomechanical modelling and simulation as well as in a successful design of complex implants. PMID:21762533
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose the dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
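The high-order position model can be sketched as a per-pixel quadratic fit in time; this hypothetical snippet omits HOMDF's intensity model, the two energy objectives, and the dynamic-filtering estimation:

```python
import numpy as np

def fit_motion(times, positions):
    """Fit x(t) = x0 + v*t + (1/2)*a*t^2 to tracked positions, giving the
    velocity and acceleration terms of a high-order motion model."""
    return np.poly1d(np.polyfit(times, positions, 2))

# evaluate the model at an in-between time to place the interpolated frame
t = np.array([0.0, 1.0, 2.0, 3.0])
x = 1.0 + 2.0 * t + 0.4 * t ** 2   # synthetic accelerating trajectory
traj = fit_motion(t, x)
x_mid = traj(1.5)                   # pixel position at the new frame time
```

Unlike linear motion-compensated interpolation, the quadratic term lets the interpolated position reflect acceleration between the two source frames.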
Using dynamic mode decomposition for real-time background/foreground separation in video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
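The separation described above can be sketched in a few lines of numpy: one SVD, an eigendecomposition of the projected operator, and one least-squares solve. This is a generic exact-DMD sketch, not the patented real-time/multi-resolution implementation:

```python
import numpy as np

def dmd_separate(frames, tol=1e-2):
    """DMD background/foreground separation. frames: (n_pixels, n_frames)
    matrix of vectorized grayscale frames. Modes whose eigenvalues lie near 1
    (|log(lambda)| < tol) evolve slowly and form the low-rank background;
    the sparse foreground is the residual."""
    X, Y = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)  # the one SVD
    r = int(np.sum(s > 1e-10 * s[0]))                 # numerical rank truncation
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s        # projected linear operator
    lam, W = np.linalg.eig(A_tilde)
    Phi = (Y @ Vh.conj().T / s) @ W                   # exact DMD modes
    b = np.linalg.lstsq(Phi.astype(complex),          # the one linear solve
                        frames[:, 0].astype(complex), rcond=None)[0]
    slow = np.abs(np.log(lam.astype(complex))) < tol  # near-unit eigenvalues
    dyn = lam[slow, None] ** np.arange(frames.shape[1])
    background = ((Phi[:, slow] * b[slow]) @ dyn).real
    return background, frames - background
```

Because only an SVD and a least-squares solve are needed, this runs far faster than iterative RPCA on the same frame matrix.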
Video-microscopy of NCAP films: the observation of LC droplets in real time
NASA Astrophysics Data System (ADS)
Reamey, Robert H.; Montoya, Wayne; Wong, Abraham
1992-06-01
We have used video-microscopy to observe the behavior of liquid crystal (LC) droplets within nematic droplet-polymer films (NCAP) as the droplets respond to an applied electric field. The textures observed at intermediate fields yielded information about the process of liquid crystal orientation dynamics within droplets. The nematic droplet-polymer films had low LC content (less than 1 percent) to allow the observation of individual droplets in a 2 - 6 micrometer size range. The aqueous emulsification technique was used to prepare the films, as it allows the straightforward preparation of low LC content films with a controlled droplet size range. Standard electro-optical (E-O) tests were also performed on the films, allowing us to correlate single droplet behavior with that of the film as a whole. Hysteresis measured in E-O tests was visually confirmed by droplet orientation dynamics; a film which had high hysteresis in E-O tests exhibited distinctly different LC orientations within the droplets when ramped up in voltage than when ramped down in voltage. Ramping the applied voltage to well above saturation resulted in some droplets becoming 'stuck' in a new droplet structure which could be made to revert to bipolar with high voltage pulses or with heat.
Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity
NASA Technical Reports Server (NTRS)
Kartuzova, O.; Kassemi, M.
2016-01-01
A CFD model for simulating the fluid dynamics of the jet-induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed at a range of jet Weber numbers from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from the video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.
Code of Federal Regulations, 2010 CFR
2010-01-01
...” means any toy, game, or other article designed, labeled, advertised, or otherwise intended for use by... designed primarily for use by adults which may be used incidentally by children, or video games. (2) The term video games means video game hardware systems, which are games that both produce a dynamic video...
Code of Federal Regulations, 2011 CFR
2011-01-01
...” means any toy, game, or other article designed, labeled, advertised, or otherwise intended for use by... designed primarily for use by adults which may be used incidentally by children, or video games. (2) The term video games means video game hardware systems, which are games that both produce a dynamic video...
Code of Federal Regulations, 2012 CFR
2012-01-01
...” means any toy, game, or other article designed, labeled, advertised, or otherwise intended for use by... designed primarily for use by adults which may be used incidentally by children, or video games. (2) The term video games means video game hardware systems, which are games that both produce a dynamic video...
16 CFR § 1505.1 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
...” means any toy, game, or other article designed, labeled, advertised, or otherwise intended for use by... designed primarily for use by adults which may be used incidentally by children, or video games. (2) The term video games means video game hardware systems, which are games that both produce a dynamic video...
Code of Federal Regulations, 2014 CFR
2014-01-01
...” means any toy, game, or other article designed, labeled, advertised, or otherwise intended for use by... designed primarily for use by adults which may be used incidentally by children, or video games. (2) The term video games means video game hardware systems, which are games that both produce a dynamic video...
Trelease, R B; Nieder, G L; Dørup, J; Hansen, M S
2000-04-15
Continuing evolution of computer-based multimedia technologies has produced QuickTime, a multiplatform digital media standard that is supported by stand-alone commercial programs and World Wide Web browsers. While its core functions might be most commonly employed for production and delivery of conventional video programs (e.g., lecture videos), additional QuickTime VR "virtual reality" features can be used to produce photorealistic, interactive "non-linear movies" of anatomical structures ranging in size from microscopic through gross anatomic. But what is really included in QuickTime VR and how can it be easily used to produce novel and innovative visualizations for education and research? This tutorial introduces the QuickTime multimedia environment, its QuickTime VR extensions, basic linear and non-linear digital video technologies, image acquisition, and other specialized QuickTime VR production methods. Four separate practical applications are presented for light and electron microscopy, dissectable preserved specimens, and explorable functional anatomy in magnetic resonance cinegrams.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
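The control loop sketched in this abstract (a PID controller driving exposure time toward a target image statistic) can be illustrated with a minimal discrete PID. The gains, the saturation setpoint, and the linear "plant" below are invented for the sketch; they are not the paper's tuned adaptive controller.

```python
def pid_exposure(measure, setpoint, kp=5.0, ki=2.0, kd=0.0, steps=100):
    """Adjust exposure time so that measure(exposure), e.g. a saturation
    level in [0, 1], converges to 'setpoint'. Positional-form discrete PID."""
    integral, prev_err, exposure = 0.0, 0.0, 1.0
    for _ in range(steps):
        err = setpoint - measure(exposure)
        integral += err                      # accumulated error (I term)
        exposure = kp * err + ki * integral + kd * (err - prev_err)
        prev_err = err
    return exposure

# Toy sensor model: saturation level grows linearly with exposure, clipped at 1.
plant = lambda e: min(1.0, 0.08 * e)
exposure = pid_exposure(plant, setpoint=0.6)
```

With these toy gains the loop settles in well under the 100 steps allowed; the paper reports adaptation within eight frames for its tuned controller.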
Parachute Aerodynamics From Video Data
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Queen, Eric M.; Cruz, Juan R.
2005-01-01
A new data analysis technique for the identification of static and dynamic aerodynamic stability coefficients from wind tunnel test video data is presented. This new technique was applied to video data obtained during a parachute wind tunnel test program conducted in support of the Mars Exploration Rover Mission. Total angle-of-attack data obtained from video images were used to determine the static pitching moment curve of the parachute. During the original wind tunnel test program the static pitching moment curve had been determined by forcing the parachute to a specific total angle-of-attack and measuring the forces generated. It is shown with the new technique that this parachute, when free to rotate, trims at an angle-of-attack two degrees lower than was measured during the forced-angle tests. An attempt was also made to extract pitch damping information from the video data. Results suggest that the parachute is dynamically unstable at the static trim point and tends to become dynamically stable away from the trim point. These trends are in agreement with limit-cycle-like behavior observed in the video. However, the chaotic motion of the parachute produced results with large uncertainty bands.
Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings
NASA Technical Reports Server (NTRS)
Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas
2011-01-01
Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
Videos and Animations for Vocabulary Learning: A Study on Difficult Words
ERIC Educational Resources Information Center
Lin, Chih-cheng; Tseng, Yi-fang
2012-01-01
Studies on using still images and dynamic videos in multimedia annotations produced inconclusive results. A further examination, however, showed that the principle of using videos to explain complex concepts was not observed in the previous studies. This study was intended to investigate whether videos, compared with pictures, better assist…
Understanding viral video dynamics through an epidemic modelling approach
NASA Astrophysics Data System (ADS)
Sachak-Patwa, Rahil; Fadai, Nabil T.; Van Gorder, Robert A.
2018-07-01
Motivated by the hypothesis that the spread of viral videos is analogous to the spread of a disease epidemic, we formulate a novel susceptible-exposed-infected-recovered-susceptible (SEIRS) delay differential equation epidemic model to describe the popularity evolution of viral videos. Our models incorporate time-delay, in order to accurately describe the virtual contact process between individuals and the temporary immunity of individuals to videos after they have grown tired of watching them. We validate our models by fitting model parameters to viewing data from YouTube music videos, in order to demonstrate that the model solutions accurately reproduce real behaviour seen in this data. We use an SEIR model to describe the initial growth and decline of daily views, and an SEIRS model to describe the long term behaviour of the popularity of music videos. We also analyse the decay rates in the daily views of videos, determining whether they follow a power law or exponential distribution. Although we focus on viral videos, the modelling approach may be used to understand dynamics emergent from other areas of science which aim to describe consumer behaviour.
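The compartmental idea behind this model can be sketched with a basic SEIR system integrated by forward Euler; the paper's delay terms, SEIRS re-susceptibility, and YouTube-fitted parameters are omitted, and the rates below are invented for illustration.

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, dt=0.01, steps=10_000):
    """Forward-Euler integration of a basic SEIR model. In the viral-video
    reading: S = not yet exposed, E = aware but not yet watching,
    I = actively watching/sharing, R = tired of the video."""
    s, e, i, r = s0, e0, i0, r0
    out = []
    for _ in range(steps):
        ds = -beta * s * i            # exposure through virtual contact
        de = beta * s * i - sigma * e # incubation: exposed -> infected
        di = sigma * e - gamma * i    # recovery: infected -> recovered
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
        out.append((s, e, i, r))
    return out

traj = seir(beta=0.5, sigma=0.3, gamma=0.2, s0=0.99, e0=0.01, i0=0.0, r0=0.0)
```

The trajectory reproduces the qualitative shape the paper fits to daily-view data: a rise in the infected (watching) fraction followed by a decline as the susceptible pool depletes.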
Photo-acoustic and video-acoustic methods for sensing distant sound sources
NASA Astrophysics Data System (ADS)
Slater, Dan; Kozacik, Stephen; Kelmelis, Eric
2017-05-01
Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques and when combined with high-definition video imagery it can help to provide a more cinema-like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often with a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings.
However, doing so requires overcoming significant limitations typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric turbulence distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
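The core of the second readout approach (extracting audio directly from a video stream) reduces to taking one sample per frame from the modulated scene brightness and removing the strong bias term the abstract mentions. A toy version, not the ATCOM processing chain, looks like this:

```python
import numpy as np

def audio_from_frames(frames, fps):
    """Crude audio readout from video: average each frame to one sample,
    then remove the DC bias. The sample rate equals the frame rate, so the
    recoverable bandwidth is limited to fps/2 -- one of the sample-rate
    limitations the paper notes for video-based readout."""
    sig = frames.reshape(frames.shape[0], -1).mean(axis=1)  # brightness per frame
    return sig - sig.mean(), fps                            # waveform, sample rate
```

Feeding in frames whose brightness is modulated by a 50 Hz tone at 1000 frames/s recovers a 50 Hz waveform, which a spectral peak search confirms.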
A Wide Dynamic Range Tapped Linear Array Image Sensor
NASA Astrophysics Data System (ADS)
Washkurak, William D.; Chamberlain, Savvas G.; Prince, N. Daryl
1988-08-01
Detectors for acousto-optic signal processing applications require fast transient response as well as wide dynamic range. There are two major choices of detectors: conductive or integration mode. Conductive mode detectors have an initial transient period before they reach their equilibrium state. The duration of this period is dependent on light level as well as detector capacitance. At low light levels a conductive mode detector is very slow; response time is typically on the order of milliseconds. Generally, to obtain fast transient response an integrating mode detector is preferred. With integrating mode detectors, the dynamic range is determined by the charge storage capability of the transport shift registers and the noise level of the image sensor. The conventional method used to improve dynamic range is to increase the shift register charge storage capability. To achieve a dynamic range of fifty thousand, assuming two hundred noise-equivalent electrons, a charge storage capability of ten million electrons would be required. In order to accommodate this amount of charge, unrealistic shift register widths would be required. Therefore, with an integrating mode detector it is difficult to achieve a dynamic range of over four orders of magnitude of input light intensity. Another alternative is to solve the problem at the photodetector and not the shift register. DALSA's wide dynamic range detector utilizes an optimized, ion-implant-doped, profiled MOSFET photodetector specifically designed for wide dynamic range. When this new detector operates at high speed and at low light levels the photons are collected and stored in an integrating fashion. However, at bright light levels where transient periods are short, the detector switches into a conductive mode. The light intensity is logarithmically compressed into small charge packets, easily carried by the CCD shift register.
As a result of the logarithmic conversion, dynamic ranges of over six orders of magnitude are obtained. To achieve the short integration times necessary in acousto-optic applications, the wide dynamic range detector has been implemented in a tapped array architecture with eight outputs and 256 photoelements. Operation of each output at 16 MHz yields detector integration times of 2 microseconds. Buried channel two-phase CCD shift register technology is utilized to minimize image sensor noise, improve video output rates, and increase ease of operation.
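The full-well arithmetic in this abstract is easy to check: a 50,000:1 dynamic range over a 200-electron noise floor implies a ten-million-electron storage requirement for a linear register, while the logarithmic mode's six orders of magnitude corresponds to 120 dB. A quick verification:

```python
import math

noise_electrons = 200           # noise-equivalent electrons
target_ratio = 50_000           # desired linear dynamic range
full_well = noise_electrons * target_ratio   # electrons a linear register must hold
ratio_db = 20 * math.log10(target_ratio)     # the 50,000:1 range in dB
log_mode_db = 20 * math.log10(1e6)           # six orders of magnitude in dB

print(full_well, round(ratio_db), log_mode_db)  # 10000000 94 120.0
```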
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). Conclusions: The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
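The summary statistic quoted above, absolute mean difference plus two standard deviations, is straightforward to compute from per-frame center positions. One plausible reading of that metric, with invented sample data:

```python
import numpy as np

def positional_error(target_y, field_y):
    """Absolute mean difference + 2 SD (in mm) between per-frame centers of
    the exposed target and the exposed field, as a single tracking-accuracy
    number. One interpretation of the abstract's metric, not the authors' code."""
    diff = np.abs(np.asarray(target_y, float) - np.asarray(field_y, float))
    return diff.mean() + 2 * diff.std()

# Hypothetical per-frame Y centers (mm): a constant 0.5 mm offset.
err = positional_error([10.0, 11.0, 12.0, 13.0], [10.5, 11.5, 12.5, 13.5])
```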
Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV.
Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa; Bono, Hidemasa
2012-03-01
In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but these static contents cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability.
Video-Based Fingerprint Verification
Qin, Wei; Yin, Yilong; Liu, Lili
2013-01-01
Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283
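The equal error rate reported above can be estimated from lists of genuine and impostor match scores with a simple threshold sweep. This is a generic sketch with toy scores, not the paper's inside/outside similarity computation:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over the observed scores; the EER is the rate at
    which the false acceptance rate (FAR) and false rejection rate (FRR)
    cross. Returns the average of FAR and FRR at the closest crossing."""
    genuine, impostor = np.asarray(genuine, float), np.asarray(impostor, float)
    best = (1.0, 0.0)
    for t in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2
```

Perfectly separable score lists give an EER of zero; overlapping lists give the crossing rate.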
Flame experiments at the advanced light source: new insights into soot formation processes.
Hansen, Nils; Skeen, Scott A; Michelsen, Hope A; Wilson, Kevin R; Kohse-Höinghaus, Katharina
2014-05-26
The following experimental protocols and the accompanying video are concerned with the flame experiments that are performed at the Chemical Dynamics Beamline of the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory(1-4). This video demonstrates how the complex chemical structures of laboratory-based model flames are analyzed using flame-sampling mass spectrometry with tunable synchrotron-generated vacuum-ultraviolet (VUV) radiation. This experimental approach combines isomer-resolving capabilities with high sensitivity and a large dynamic range(5,6). The first part of the video describes experiments involving burner-stabilized, reduced-pressure (20-80 mbar) laminar premixed flames. A small hydrocarbon fuel was used for the selected flame to demonstrate the general experimental approach. It is shown how species' profiles are acquired as a function of distance from the burner surface and how the tunability of the VUV photon energy is used advantageously to identify many combustion intermediates based on their ionization energies. For example, this technique has been used to study gas-phase aspects of the soot-formation processes, and the video shows how the resonance-stabilized radicals, such as C3H3, C3H5, and i-C4H5, are identified as important intermediates(7). The work has been focused on soot formation processes, and, from the chemical point of view, this process is very intriguing because chemical structures containing millions of carbon atoms are assembled from a fuel molecule possessing only a few carbon atoms in just milliseconds. The second part of the video highlights a new experiment, in which an opposed-flow diffusion flame and synchrotron-based aerosol mass spectrometry are used to study the chemical composition of the combustion-generated soot particles(4). 
The experimental results indicate that the widely accepted H-abstraction-C2H2-addition (HACA) mechanism is not the sole molecular growth process responsible for the formation of the observed large polycyclic aromatic hydrocarbons (PAHs).
Flame Experiments at the Advanced Light Source: New Insights into Soot Formation Processes
Hansen, Nils; Skeen, Scott A.; Michelsen, Hope A.; Wilson, Kevin R.; Kohse-Höinghaus, Katharina
2014-01-01
The following experimental protocols and the accompanying video are concerned with the flame experiments that are performed at the Chemical Dynamics Beamline of the Advanced Light Source (ALS) of the Lawrence Berkeley National Laboratory1-4. This video demonstrates how the complex chemical structures of laboratory-based model flames are analyzed using flame-sampling mass spectrometry with tunable synchrotron-generated vacuum-ultraviolet (VUV) radiation. This experimental approach combines isomer-resolving capabilities with high sensitivity and a large dynamic range5,6. The first part of the video describes experiments involving burner-stabilized, reduced-pressure (20-80 mbar) laminar premixed flames. A small hydrocarbon fuel was used for the selected flame to demonstrate the general experimental approach. It is shown how species’ profiles are acquired as a function of distance from the burner surface and how the tunability of the VUV photon energy is used advantageously to identify many combustion intermediates based on their ionization energies. For example, this technique has been used to study gas-phase aspects of the soot-formation processes, and the video shows how the resonance-stabilized radicals, such as C3H3, C3H5, and i-C4H5, are identified as important intermediates7. The work has been focused on soot formation processes, and, from the chemical point of view, this process is very intriguing because chemical structures containing millions of carbon atoms are assembled from a fuel molecule possessing only a few carbon atoms in just milliseconds. The second part of the video highlights a new experiment, in which an opposed-flow diffusion flame and synchrotron-based aerosol mass spectrometry are used to study the chemical composition of the combustion-generated soot particles4. 
The experimental results indicate that the widely accepted H-abstraction-C2H2-addition (HACA) mechanism is not the sole molecular growth process responsible for the formation of the observed large polycyclic aromatic hydrocarbons (PAHs). PMID:24894694
Establishing the reliability of rhesus macaque social network assessment from video observations
Feczko, Eric; Mitchell, Thomas A. J.; Walum, Hasse; Brooks, Jenna M.; Heitz, Thomas R.; Young, Larry J.; Parr, Lisa A.
2015-01-01
Understanding the properties of a social environment is important for understanding the dynamics of social relationships. Understanding such dynamics is relevant for multiple fields, ranging from animal behaviour to social and cognitive neuroscience. To quantify social environment properties, recent studies have incorporated social network analysis. Social network analysis quantifies both the global and local properties of a social environment, such as social network efficiency and the roles played by specific individuals, respectively. Despite the plethora of studies incorporating social network analysis, methods to determine the amount of data necessary to derive reliable social networks are still being developed. Determining the amount of data necessary for a reliable network is critical for measuring changes in the social environment, for example following an experimental manipulation, and therefore may be critical for using social network analysis to statistically assess social behaviour. In this paper, we extend methods for measuring error in acquired data and for determining the amount of data necessary to generate reliable social networks. We derived social networks from a group of 10 male rhesus macaques, Macaca mulatta, for three behaviours: spatial proximity, grooming and mounting. Behaviours were coded using a video observation technique, where video cameras recorded the compound where the 10 macaques resided. We collected, coded and used 10 h of video data to construct these networks. Using the methods described here, we found in our data that 1 h of spatial proximity observations produced reliable social networks. However, this may not be true for other studies due to differences in data acquisition. Our results have broad implications for measuring and predicting the amount of error in any social network, regardless of species. PMID:26392632
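The reliability question this study addresses (how much observation time yields a stable network) is often probed with a split-half check: build edge weights from two halves of the coded observations and correlate them. A toy version with invented dyadic interaction records, not the authors' exact procedure:

```python
import numpy as np

def edge_weights(events, n):
    """Symmetric co-occurrence counts from (animal_a, animal_b) records,
    flattened to the upper triangle so each dyad appears once."""
    w = np.zeros((n, n))
    for a, b in events:
        w[a, b] += 1
        w[b, a] += 1
    return w[np.triu_indices(n, k=1)]

def split_half_reliability(events, n):
    """Pearson correlation between networks built from each half of the
    observation record; values near 1 suggest enough data was collected."""
    half = len(events) // 2
    w1 = edge_weights(events[:half], n)
    w2 = edge_weights(events[half:], n)
    return np.corrcoef(w1, w2)[0, 1]

# Hypothetical coded observations for 4 animals: dyad (0,1) interacts often,
# dyad (2,3) rarely, with the pattern stable over time.
events = [(0, 1), (0, 1), (0, 1), (2, 3)] * 10
r = split_half_reliability(events, n=4)
```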
Deep visual-semantic for crowded video understanding
NASA Astrophysics Data System (ADS)
Deng, Chunhua; Zhang, Junwen
2018-03-01
Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have achieved a significant breakthrough in learning representations from images. However, the learning of visual-semantic features, and how they can be effectively extracted for video analysis, still remains a challenging task. In this study, we propose a novel visual-semantic method to capture both appearance and dynamic representations. In particular, we propose a spatial context method, based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also applied the fractional encoding method to dynamic images. Experimental results on the WWW crowded video dataset demonstrate that the proposed method outperforms the state of the art.
Seeing and Doing Science--With Video.
ERIC Educational Resources Information Center
Berger, Michelle Abel
1994-01-01
The article presents a video-based unit on camouflage for students in grades K-5, explaining how to make the classroom VCR a dynamic teaching tool. Information is offered on introducing the unit, active viewing strategies, and follow-up activities. Tips for teaching with video are included. (SM)
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone.
Effect of tone mapping operators on visual attention deployment
NASA Astrophysics Data System (ADS)
Narwaria, Manish; Perreira Da Silva, Matthieu; Le Callet, Patrick; Pepion, Romuald
2012-10-01
High Dynamic Range (HDR) images/videos require the use of a tone mapping operator (TMO) when visualized on Low Dynamic Range (LDR) displays. From an artistic intention point of view, TMOs are not necessarily transparent and may induce different viewing behaviors. In this paper, we investigate and quantify how TMOs modify visual attention (VA). To that end, both objective and subjective tests in the form of eye-tracking experiments were conducted on several still images processed by 11 different TMOs. Our studies confirm that TMOs can indeed modify human attention and fixation behavior significantly. They therefore suggest that VA needs to be considered when evaluating the overall perceptual impact of TMOs on HDR content. Since existing studies have so far considered only quality or aesthetic appeal, this study brings in a new perspective on the importance of VA in HDR content processing for visualization on LDR displays.
Video Views and Reviews: Golgi Export, Targeting, and Plasma Membrane Caveolae
ERIC Educational Resources Information Center
Watters, Christopher
2004-01-01
In this article, the author reviews videos from "Molecular Biology of the Cell (MBC)" depicting various aspects of plasma membrane (PM) dynamics, including the targeting of newly synthesized components and the organization of those PM invaginations called caveolae. The papers accompanying these videos describe, respectively, the constitutive…
The effect of a silicone wristband in dynamic balance.
Teruya, Thiago Toshi; Matareli, Bruno Machado; Soares Romano, Fillipe; Mochizuki, Luis
2013-10-01
The effect of a wristband on the dynamic balance of young adults was assessed. Twenty healthy young adults wore either a commercial Power Balance™ or a fake silicone wristband. A 3D accelerometer was attached to their lumbar region to measure body sway. They played the video game Tightrope (Wii video game console) with and without a wristband; body sway acceleration was measured. Mean balance sway acceleration and its variability were the same in all conditions, so silicone wristbands do not modify dynamic balance control.
Video markers tracking methods for bike fitting
NASA Astrophysics Data System (ADS)
Rajkiewicz, Piotr; Łepkowska, Katarzyna; Cygan, Szymon
2015-09-01
Sports cycling has become increasingly popular in recent years. Obtaining and maintaining a proper position on the bike has been shown to be crucial for performance, comfort and injury avoidance. Various techniques of bike fitting are available - from rough settings based on body dimensions to professional services making use of sophisticated equipment and expert knowledge. Modern fitting techniques use mainly joint angles as a criterion of proper position. In this work we examine the performance of two proposed methods for dynamic cyclist position assessment based on video data recorded during stationary cycling. The proposed methods are intended for home use, to help amateur cyclists improve their position on the bike, and therefore no professional equipment is used. As a result of data processing, ranges of angles in selected joints are provided. Finally, the strengths and weaknesses of both proposed methods are discussed.
Gaze Allocation in a Dynamic Situation: Effects of Social Status and Speaking
ERIC Educational Resources Information Center
Foulsham, Tom; Cheng, Joey T.; Tracy, Jessica L.; Henrich, Joseph; Kingstone, Alan
2010-01-01
Human visual attention operates in a context that is complex, social and dynamic. To explore this, we recorded people taking part in a group decision-making task and then showed video clips of these situations to new participants while tracking their eye movements. Observers spent the majority of time looking at the people in the videos, and in…
Self-induced stretch syncope of adolescence: a video-EEG documentation.
Mazzuca, Michel; Thomas, Pierre
2007-12-01
We present the first video-EEG documentation, with ECG and EMG features, of stretch syncope of adolescence in a young, healthy 16-year-old boy. Stretch syncope of adolescence is a rarely reported, benign cause of fainting in young patients, which can be confused with epileptic seizures. In our patient, syncopes were self-induced to avoid school. Dynamic transcranial Doppler showed evidence of blood flow decrease in both posterior cerebral arteries mimicking effects of a Valsalva manoeuvre. Dynamic angiogram of the vertebral arteries was normal. Hypotheses concerning the physiopathology are discussed. [Published with video sequences].
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min
2015-03-01
As new technologies like High Dynamic Range cameras, AMOLED and high resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of Tone Mapping is limited to pixel value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low pass filtered (LPF) and high pass filtered (HPF) bands. The Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. Colorfulness of the original image may be preserved or enhanced by correcting the chroma components by means of a saturation function. Localized content adaptation is further improved by dividing the image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated in a camera, video or display pipeline with a minimal hardware budget.
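The band-splitting pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices (a box blur for the LPF, a gamma curve as the global TM function, and a fixed coring threshold); the paper's actual filters and hardware mapping are not public here.

```python
import numpy as np

def box_blur(img, k=7):
    """Separable box low-pass filter with edge padding (stand-in LPF)."""
    pad = k // 2
    kernel = np.ones(k) / k
    out = np.pad(img, pad, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def core(hpf, t=0.01):
    """Coring: suppress small HPF magnitudes to avoid boosting noise."""
    return np.sign(hpf) * np.maximum(np.abs(hpf) - t, 0.0)

def local_tone_map(intensity, gamma=0.5, k=7, t=0.01):
    """Split intensity (range [0, 1]) into LPF/HPF bands, tone-map the
    LPF band globally, then add the cored HPF band back for local contrast."""
    lpf = box_blur(intensity, k)
    hpf = intensity - lpf                              # local detail band
    mapped = np.power(np.clip(lpf, 0, None), gamma)    # global TM on LPF band
    return np.clip(mapped + core(hpf, t), 0.0, 1.0)
```

On a flat region the HPF band is zero and the result reduces to the global tone curve; detail regions keep their local contrast because the HPF band bypasses the curve.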
OPSO - The OpenGL based Field Acquisition and Telescope Guiding System
NASA Astrophysics Data System (ADS)
Škoda, P.; Fuchs, J.; Honsa, J.
2006-07-01
We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers are giving only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by BT878-based video capture card. Its basic capabilities include the sliding averaging of hundreds of frames with bad pixel masking and removal of outliers, display of median of set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross cuts of seeing disk within given intensity range and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over Video for Linux (v4l2) interface and feeds the frames into the large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI is, however, dynamically built in Python from XML description of widgets prepared in Glade. All tasks are exchanging information by IPC calls using the shared memory segments.
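The image-enhancing core described above (sliding averaging of many frames with bad-pixel masking and removal of outliers) can be sketched as follows. The function name, sigma-clipping rule, and parameters are illustrative assumptions, not the OPSO implementation:

```python
import numpy as np

def enhance(frames, bad_pixel_mask, nsigma=3.0):
    """Average a stack of noisy TV frames, ignoring known bad pixels and
    sigma-clipping per-pixel outliers (hot pixels, transient hits)."""
    stack = np.array(frames, dtype=float)     # copy so the input is untouched
    stack[:, bad_pixel_mask] = np.nan         # mask dead/hot pixels everywhere
    med = np.nanmedian(stack, axis=0)
    std = np.nanstd(stack, axis=0)
    outliers = np.abs(stack - med) > nsigma * std
    stack[outliers] = np.nan                  # drop per-pixel outliers
    return np.nanmean(stack, axis=0)          # enhanced frame (NaN where all-bad)
```

Bad pixels come out as NaN in the result, which a display stage could inpaint from neighbours.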
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The idea is motivated by the fact that video content is becoming richer and its dimensionality higher, which makes direct video analysis and understanding impractical. The proposed approach reduces the dimensionality of the video content using LTSA and then generates video fingerprints in the low-dimensional space for video copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
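The sliding-window matching step can be sketched as below, assuming each video has already been reduced to a sequence of low-dimensional fingerprint vectors (e.g., LTSA output, one row per frame). The function name and threshold are hypothetical:

```python
import numpy as np

def match_copy(ref, query, threshold=0.1):
    """Slide the query fingerprint sequence over the reference sequence and
    return (best offset, is_copy): the offset with minimal mean per-frame
    distance, flagged as a copy when that distance falls below a threshold."""
    n, m = len(ref), len(query)
    dists = [np.mean(np.linalg.norm(ref[i:i + m] - query, axis=1))
             for i in range(n - m + 1)]
    best = int(np.argmin(dists))
    return best, dists[best] < threshold
```

A dynamic window, as in the paper, would additionally vary m to tolerate temporal edits; the fixed-window version above shows only the basic matching idea.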
Gooding, Lori F; Mori-Inoue, Satoko
2011-01-01
The purpose of this study was to examine the effect of video exposure on music therapy students' perceptions of clinical applications of popular music in the field of music therapy. Fifty-one participants were randomly divided into two groups and exposed to a popular song in either audio-only or music video format. Participants were asked to indicate clinical applications; specifically, participants chose: (a) possible population(s), (b) most appropriate population(s), (c) possible age range(s), (d) most appropriate age ranges, (e) possible goal area(s) and (f) most appropriate goal area. Data for each of these categories were compiled and analyzed, with no significant differences found in the choices made by the audio-only and video groups. Three items, (a) selection of the bereavement population, (b) selection of bereavement as the most appropriate population and (c) selection of the age ranges of pre teen/mature adult, were additionally selected for further analysis due to their relationship to the video content. Analysis results revealed a significant difference between the video and audio-only groups for the selection of these specific items, with the video group's selections more closely aligned to the video content. Results of this pilot study suggest that music video exposure to popular music can impact how students choose to implement popular songs in the field of music therapy.
Star Wars in psychotherapy: video games in the office.
Ceranoglu, Tolga Atilla
2010-01-01
Video games are used in medical practice during psycho-education in chronic disease management, physical therapy, rehabilitation following traumatic brain injury, and as an adjunct in pain management during medical procedures or cancer chemotherapy. In psychiatric practice, video games aid in social skills training of children with developmental delays and in cognitive behavioral therapy (CBT). This most popular children's toy may prove a useful tool in dynamic psychotherapy of youth. The author provides a framework for using video games in psychotherapy by considering the characteristics of video games and describes the ways their use has facilitated various stages of therapeutic process. Just as other play techniques build a relationship and encourage sharing of emotional themes, sitting together in front of a console and screen facilitates a relationship and allows a safe path for the patient's conflict to emerge. During video game play, the therapist may observe thought processes, impulsivity, temperament, decision-making, and sharing, among other aspects of a child's clinical presentation. Several features inherent to video games require a thoughtful approach as resistance and transference in therapy may be elaborated differently in comparison to more traditional toys. Familiarity with the video game content and its dynamics benefits child mental health clinicians in their efforts to help children and their families.
Species and Scale Dependence of Bacterial Motion Dynamics
NASA Astrophysics Data System (ADS)
Sund, N. L.; Yang, X.; Parashar, R.; Plymale, A.; Hu, D.; Kelly, R.; Scheibe, T. D.
2017-12-01
Many metal reducing bacteria are motile, with their motion characteristics described by run-and-tumble behavior exhibiting series of flights (jumps) and waiting (residence) times spanning a wide range of values. Accurate models of motility allow for improved design and evaluation of in-situ bioremediation in the subsurface. While many bioremediation models neglect the motion of the bacteria, others treat motility using an advection-dispersion equation, which assumes that the motion of the bacteria is Brownian. The assumption of Brownian motion to describe motility has enormous implications for the predictive capabilities of bioremediation models, yet experimental evidence for this assumption is mixed [1][2][3]. We hypothesize that this is due to the species and scale dependence of the motion dynamics. We test our hypothesis by analyzing videos of motile bacteria of five different species in open domains. Trajectories of individual cells ranging from several seconds to a few minutes in duration are extracted in neutral conditions (in the absence of any chemical gradient). The density of the bacteria is kept low so that the interaction between the bacteria is minimal. Preliminary results show a transition from Fickian (Brownian) to non-Fickian behavior for one species of bacteria (Pelosinus) and persistent Fickian behavior of another species (Geobacter).

Figure: Video frames of motile bacteria with the last 10 seconds of their trajectories drawn in red. (left) Pelosinus and (right) Geobacter.

[1] Ariel, Gil, et al. "Swarming bacteria migrate by Lévy Walk." Nature Communications 6 (2015).
[2] Saragosti, Jonathan, Pascal Silberzan, and Axel Buguin. "Modeling E. coli tumbles by rotational diffusion. Implications for chemotaxis." PLoS ONE 7.4 (2012): e35412.
[3] Wu, Mingming, et al. "Collective bacterial dynamics revealed using a three-dimensional population-scale defocused particle tracking technique." Applied and Environmental Microbiology 72.7 (2006): 4987-4994.
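The Fickian/non-Fickian distinction is commonly quantified by the scaling exponent of the mean squared displacement (MSD), MSD ~ t^alpha: alpha near 1 indicates Brownian (Fickian) motion, while alpha > 1 indicates superdiffusive, Lévy-like motion. A minimal sketch of this standard analysis (not the authors' code), assuming 2-D trajectories:

```python
import numpy as np

def msd(track, max_lag=None):
    """Mean squared displacement of one 2-D trajectory of shape (T, 2)."""
    track = np.asarray(track, dtype=float)
    max_lag = max_lag or len(track) // 4
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def anomalous_exponent(track):
    """Fit MSD ~ t**alpha on a log-log scale: alpha ~ 1 is Fickian
    (Brownian), alpha ~ 2 is ballistic, in between is superdiffusive."""
    m = msd(track)
    lags = np.arange(1, len(m) + 1)
    alpha, _ = np.polyfit(np.log(lags), np.log(m), 1)
    return alpha
```

A perfectly straight run (ballistic motion) gives alpha = 2; a pure random walk gives alpha near 1.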
Tsujimoto, Yukio; Nose, Yorihito; Ohba, Kenkichi
2003-01-01
The pitot tube is a common device to measure flow velocity. If the pitot tube is used as an urodynamic catheter, urinary velocity and urethral pressure may be measured simultaneously. However, to our knowledge, urodynamic studies with the pitot tube have not been reported. We experimentally and clinically evaluated the feasibility of the pitot tube to measure urinary velocity with a transrectal ultrasound guided video urodynamic system. We carried out a basal experiment measuring flow velocity in model urethras of 4.5-8.0 mm in inner diameter with a 12-Fr pitot tube. In a clinical trial, 79 patients underwent transrectal ultrasound guided video urodynamic studies with the 12-Fr pitot tube. Urinary velocity was calculated from dynamic pressure (Pd) with the pitot tube formula and the correcting equation according to the results of the basal experiment. Velocity measured by the pitot tube was proportional to the average velocity in model urethras and the coefficients were determined by diameters of model urethras. We obtained a formula to calculate urinary velocity from the basal experiment. The urinary velocity could be obtained in 32 of 79 patients. Qmax was 8.1 +/- 4.3 mL/s (mean +/- SD; range, 18.4-1.3 mL/s), urethral diameter was 7.3 +/- 3.0 mm (mean +/- SD; range, 18.7-4.3 mm) and urinary velocity was 69.4 +/- 43.6 (mean +/- SD; range, 181.3-0 cm/s) at maximum flow rate. The correlation coefficient of Qmax measured by a flowmeter versus Qdv flow rate calculated with urethral diameter and velocity was 0.41 without significant difference. The use of the pitot tube as an urodynamic catheter to a transrectal ultrasound-guided video urodynamic system can measure urethral pressure, diameter and urinary velocity simultaneously. However, a thinner pitot tube and further clinical trials are needed to obtain more accurate results.
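The pitot relation underlying the measurement is Bernoulli's: dynamic pressure Pd = 0.5 * rho * v**2, so v = sqrt(2 * Pd / rho). The study's catheter-specific correction coefficients are not reproduced here; this sketch uses only the ideal relation, with urine density assumed close to that of water:

```python
import math

def pitot_velocity(pd_pa, rho=1000.0):
    """Ideal flow velocity (m/s) from dynamic pressure (Pa) via the pitot
    formula; rho in kg/m^3 (urine assumed ~ water). The clinical study
    additionally applies diameter-dependent corrections not shown here."""
    return math.sqrt(2.0 * pd_pa / rho)

print(pitot_velocity(500.0))  # 500 Pa of dynamic pressure -> 1.0 m/s
```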
The development of video game enjoyment in a role playing game.
Wirth, Werner; Ryffel, Fabian; von Pape, Thilo; Karnowski, Veronika
2013-04-01
This study examines the development of video game enjoyment over time. The results of a longitudinal study (N=62) show that enjoyment increases over several sessions. Moreover, results of a multilevel regression model indicate a causal link between the dependent variable video game enjoyment and the predictor variables exploratory behavior, spatial presence, competence, suspense and solution, and simulated experiences of life. These findings are important for video game research because they reveal the antecedents of video game enjoyment in a real-world longitudinal setting. Results are discussed in terms of the dynamics of video game enjoyment under real-world conditions.
Video Denoising via Dynamic Video Layering
NASA Astrophysics Data System (ADS)
Guo, Han; Vaswani, Namrata
2018-07-01
Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts: the "low-rank layer", the "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.
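A minimal sketch of the three-way split on a pixels-by-frames matrix, using alternating truncated-SVD and hard-thresholding steps. This is a generic robust-PCA-style illustration under assumed parameters, not the paper's algorithm:

```python
import numpy as np

def layer_split(M, rank=2, sparse_thresh=1.0, iters=30):
    """Split a (pixels x frames) video matrix M into a low-rank layer L
    (e.g., slowly varying background), a sparse layer S (large outliers),
    and a small residual, by simple alternating projections."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: truncated SVD of what is left after removing S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: keep only large-magnitude deviations from L.
        R = M - L
        S = np.where(np.abs(R) > sparse_thresh, R, 0.0)
    return L, S, M - L - S
```

On a synthetic rank-1 background with one large spike, the spike is recovered in the sparse layer and the residual shrinks toward zero.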
Use of streamed internet video for cytology training and education: www.PathLab.org.
Poller, David; Ljung, Britt-Marie; Gonda, Peter
2009-05-01
An Internet-based method is described for submission of video clips to a website editor to be reviewed, edited, and then uploaded onto a video server, with a hypertext link to a website. The information on the webpages is searchable via the website sitemap on Internet search engines. A survey of users who accessed a single 59-minute FNA cytology training video via the website showed a mean usefulness score of 3.75 for specialists/consultants (range 1-5, n = 16) and 4.4 for trainees (range 3-5, n = 12), with a mean score of 3.9 for visual and sound quality (range 2-5, n = 16). Fifteen of 17 respondents thought that posting video training material on the Internet was a good idea, and 9 of 17 would also consider submitting training videos to a similar website. This brief exercise has shown that there is value in posting educational or training video content on the Internet and that the use of streamed video accessed via the Internet will be of increasing importance.
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay
2004-11-01
H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.
Promoting Academic Programs Using Online Videos
ERIC Educational Resources Information Center
Clark, Thomas; Stewart, Julie
2007-01-01
In the last 20 years, the Internet has evolved from simply conveying text and then still photographs and music to the present-day medium in which individuals are contributors and consumers of a nearly infinite number of professional and do-it-yourself videos. In this dynamic environment, new generations of Internet users are streaming video and…
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
Guide to Synchronization of Video Systems to IRIG Timing
1992-07-01
Optical Systems Group, Range Commanders Council, White Sands Missile Range, NM 88002-5110. RCC Document 456-92. This document addresses the broad field of synchronization of video systems to IRIG timing, with emphasis on color synchronization.
Nardelli, M; Del Piccolo, L; Danzi, Op; Perlini, C; Tedeschi, F; Greco, A; Scilingo, Ep; Valenza, G
2017-07-01
Empathic doctor-patient communication has been associated with improved psycho-physiological well-being, involving cardiovascular and neuroendocrine responses. Nevertheless, a comprehensive assessment of linear and nonlinear/complex heartbeat dynamics throughout the communication of a life-threatening disease has not yet been performed. To this extent, we here study heart rate variability (HRV) series gathered from 17 subjects while watching a video in which an oncologist discloses the diagnosis of a cancer metastasis to a patient. A further 17 subjects watched the same video including additional affective empathic content. For the assessment of the two groups, linear heartbeat dynamics was quantified through measures defined in the time and frequency domains, whereas nonlinear/complex dynamics referred to measures of entropy, combined Lagged Poincaré Plots (LPP), and symbolic analyses. Considering differences between the beginning and the end of the video, results from non-parametric statistical tests demonstrated that the group watching empathic content showed HRV changes in the LF/HF ratio exclusively. Conversely, the group watching the purely informative video showed changes in vagal activity (i.e., HF power), the LF/HF ratio, as well as LPP measures. Additionally, a Support Vector Machine algorithm including HRV nonlinear/complex information was able to automatically discern between the groups with an accuracy of 76.47%. We therefore propose the use of heartbeat nonlinear/complex dynamics to objectively assess the empathy level of healthy women.
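For reference, the LF/HF ratio used above is a standard frequency-domain HRV measure: power in the low-frequency band (0.04-0.15 Hz) divided by power in the high-frequency, vagally mediated band (0.15-0.4 Hz). A generic NumPy sketch (not the study's pipeline), assuming an RR-interval series in seconds:

```python
import numpy as np

def lf_hf_ratio(rr_s, fs=4.0):
    """LF/HF ratio from an RR-interval series (seconds). The unevenly
    sampled RR series is resampled to an even tachogram at fs Hz, then
    band powers are integrated from a plain FFT periodogram."""
    t = np.cumsum(rr_s)                        # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs)    # even time grid
    tachogram = np.interp(grid, t, rr_s)       # resampled RR series
    tachogram -= tachogram.mean()
    psd = np.abs(np.fft.rfft(tachogram)) ** 2
    freqs = np.fft.rfftfreq(len(tachogram), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

A synthetic heartbeat modulated at 0.1 Hz yields a ratio well above 1; modulation at 0.25 Hz (respiratory range) drives it below 1.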
ERIC Educational Resources Information Center
Watters, Christopher D.
2003-01-01
This article reviews three "Molecular Biology of the Cell" movies. These include videos on nuclear dynamics and nuclear localization signals, spindle and chromosomal movements during mitosis, and fibroblast motility and substrate adhesiveness. (Contains 5 figures.)
Embed dynamic content in your poster.
Hutchins, B Ian
2013-01-29
A new technology has emerged that will facilitate the presentation of dynamic or otherwise inaccessible data on posters at scientific meetings. Video, audio, or other digital files hosted on mobile-friendly sites can be linked to through a quick response (QR) code, a two-dimensional barcode that can be scanned by smartphones, which then display the content. This approach is more affordable than acquiring tablet computers for playing dynamic content and can reach many users at large conferences. This resource details how to host videos, generate QR codes, and view the associated files on mobile devices.
Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-01-01
In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524
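The core detection statistic in SPN forensics is a normalized correlation between a frame's noise residual and the camera's reference sensor pattern noise; the paper's contribution is making this robust to upscaling via the MACE-MRH correlation filter, which is not sketched here. A plain-correlation baseline for illustration:

```python
import numpy as np

def spn_match(residual, reference_spn):
    """Normalized cross-correlation between a frame's noise residual and a
    camera's reference sensor pattern noise. Genuine footage from the same
    camera scores high; spliced or foreign content scores near zero."""
    a = residual - residual.mean()
    b = reference_spn - reference_spn.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```

A forgery detector would compute this statistic over local windows and flag regions whose score falls below a decision threshold.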
A novel key-frame extraction approach for both video summary and video index.
Lei, Shaoshuai; Xie, Gang; Yan, Gaowei
2014-01-01
Existing key-frame extraction methods are mostly oriented toward video summarization, while the indexing role of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summarization and video indexing. First, a dynamic distance separability algorithm is proposed to divide a shot into subshots based on semantic structure; appropriate key-frames are then extracted in each subshot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video indexing and meanwhile produces video summaries consistent with human perception.
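The per-subshot SVD step can be sketched as follows; the selection rule (largest projection onto the leading singular direction of the mean-centered frame-feature matrix) is an illustrative choice, since the paper's exact criterion is not reproduced here:

```python
import numpy as np

def keyframe_index(subshot_features):
    """Pick one key-frame per subshot: SVD the mean-centered frame-feature
    matrix and return the frame with the strongest projection onto the
    leading singular direction (the most salient frame of the subshot)."""
    F = np.asarray(subshot_features, dtype=float)   # (n_frames, feature_dim)
    U, s, Vt = np.linalg.svd(F - F.mean(axis=0), full_matrices=False)
    return int(np.argmax(np.abs(U[:, 0] * s[0])))
```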
Seeing Change in Time: Video Games to Teach about Temporal Change in Scientific Phenomena
NASA Astrophysics Data System (ADS)
Corredor, Javier; Gaydos, Matthew; Squire, Kurt
2014-06-01
This article explores how learning biological concepts can be facilitated by playing a video game that depicts interactions and processes at the subcellular level. Particularly, this article reviews the effects of a real-time strategy game that requires players to control the behavior of a virus and interact with cell structures in a way that resembles the actual behavior of biological agents. The evaluation of the video game presented here aims at showing that video games have representational advantages that facilitate the construction of dynamic mental models. Ultimately, the article shows that when a video game's characteristics come into contact with expert knowledge during game design, the game becomes an excellent medium for supporting the learning of disciplinary content related to dynamic processes. In particular, results show that students who participated in a game-based intervention aimed at teaching biology described a higher number of temporal-dependent interactions, as measured by the coding of verbal protocols and drawings, than students who used texts and diagrams to learn the same topic.
Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.
Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael
2017-08-22
We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as the problem of dictionary learning, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e. self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams, as well as motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios, and its comparison and relation to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
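The self-expression property can be illustrated with a small sketch: code one dictionary atom as a sparse combination of the other atoms by solving a lasso problem with plain ISTA. The function name and parameters are illustrative, not the paper's biconvex solver:

```python
import numpy as np

def self_express(D, j, lam=0.1, iters=200):
    """Sparse code for dictionary atom j as a combination of the OTHER
    atoms (self-expression), via ISTA for min ||A c - d_j||^2 + lam*||c||_1."""
    A = np.delete(D, j, axis=1)               # dictionary without atom j
    y = D[:, j]
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ c - y)
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft-threshold
    return c
```

When an atom duplicates another atom, the recovered code puts (nearly) all its weight on that duplicate and zero elsewhere, which is the coherence the paper's cost function enforces across video streams.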
Meet David, Our Teacher's Helper.
ERIC Educational Resources Information Center
Newell, William; And Others
1984-01-01
DAVID, Dynamic Audio Video Instructional Device, is composed of a conventional videotape recorder, a microcomputer, and a video controller, and has been successfully used for speech reading and sign language instruction with deaf students. (CL)
Strategies of Collaborative Work in the Classroom through the Design of Video Games
ERIC Educational Resources Information Center
Muñoz González, Juan Manuel; Rubio García, Sebastián; Cruz Pichardo, Ivanovna M.
2015-01-01
At the present time, the use of video games goes beyond mere amusement or entertainment due to their potential for developing capacities, dexterity and skills. Thus, video games have extended to environments like that of education, serving as didactic resources within dynamics that respond to the interests and necessities of the 21st century…
Video streaming into the mainstream.
Garrison, W
2001-12-01
Changes in Internet technology are making possible the delivery of a richer mixture of media through data streaming. High-quality, dynamic content, such as video and audio, can be incorporated into Websites simply, flexibly and interactively. Technologies such as G3 mobile communication, ADSL, cable and satellites enable new ways of delivering medical services, information and learning. Systems such as QuickTime, Windows Media and RealVideo provide reliable data streams as video-on-demand, and users can tailor the experience to their own interests. The Learning Development Centre at the University of Portsmouth has successfully used streaming technologies, together with e-learning tools such as dynamic HTML, Flash, 3D objects and online assessment, to deliver online course content in economics and earth science. The Lifesign project--to develop, catalogue and stream health sciences media for teaching--is described and future medical applications are discussed.
Visualization of fluid dynamics at NASA Ames
NASA Technical Reports Server (NTRS)
Watson, Val
1989-01-01
The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording, it would become a valuable forensic tool for engineers performing post-disaster structural evaluations. The difficulty is that almost none of these cameras are installed to monitor structural motion, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants of and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
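The per-pixel optical-flow step is beyond a short sketch, but the core idea of tracking building motion between perspective-corrected frames can be illustrated with a simple block-matching tracker; the patch geometry and function names here are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_shift(ref, cur, patch, search=5):
    """Estimate the integer (dy, dx) displacement of a patch between
    two grayscale frames by exhaustive normalized cross-correlation.

    ref, cur : 2-D arrays of the same shape
    patch    : (y0, y1, x0, x1) patch bounds in the reference frame
    search   : half-width of the search window, in pixels
    """
    y0, y1, x0, x1 = patch
    tmpl = ref[y0:y1, x0:x1].astype(float)
    tmpl -= tmpl.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(float)
            cand = cand - cand.mean()
            denom = np.linalg.norm(tmpl) * np.linalg.norm(cand)
            score = (tmpl * cand).sum() / denom if denom else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Applied per frame, the sequence of displacements forms the motion time-history whose frequency content the paper analyzes.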
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used a technique of encoding reduced-resolution video and restoring the original resolution at the decoder to improve coding efficiency. These techniques, in VC-1 and H.263 Annex Q, are called dynamic frame resizing and reduced-resolution update mode, respectively. However, they have not been widely used because they yield limited performance improvements and operate well only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. The proposed method generates low-resolution video with a convolutional neural network (CNN) in the encoder and reconstructs the original resolution with a CNN in the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the VMAF-based BD-rate improved by about 51% compared to conventional HEVC. In particular, VMAF values improved significantly at low bitrates. In subjective tests, the method also delivered better visual quality at similar bitrates.
Close-range photogrammetry with video cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser, spaced a predetermined distance from the camera, that produces a laser beam when activated. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and on angles determined from the positions of the light spots in the video images produced by the camera.
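The range computation described above can be sketched with similar triangles, under the simplifying assumption (not stated in the record) that the laser beam runs parallel to the camera's optical axis at a known baseline offset:

```python
def range_from_spot(pixel_offset, baseline_m, focal_px):
    """Triangulate range to a laser spot.

    Assumes the laser beam is parallel to the camera's optical axis and
    offset from it by `baseline_m` metres. The spot then appears
    `pixel_offset` pixels from the principal point, and similar
    triangles give: range = focal_length * baseline / pixel_offset.
    """
    if pixel_offset <= 0:
        raise ValueError("spot must be offset from the principal point")
    return focal_px * baseline_m / pixel_offset
```

With a 1000-pixel focal length and a 0.1 m baseline, a spot 10 pixels from the principal point corresponds to a 10 m range; the closer the target, the farther the spot moves from the principal point.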
A new method for digital video documentation in surgical procedures and minimally invasive surgery.
Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S
2003-02-01
Documentation of surgical procedures is limited by the accuracy of description, which depends on the vocabulary and the descriptive prowess of the surgeon. Even analog video recording could not solve the problem of documentation satisfactorily due to the abundance of recorded material. Capturing the video digitally solves most of these problems under the circumstances described in this article. We developed an inexpensive and useful digital video capturing system that consists of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). Median time for editing a video clip was 12 min for an advanced user (including cutting, titling the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capturing of intraoperative video sequences in high quality. All possibilities of documentation can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility or surgical elbow room are necessary. The cost is much lower than that of commercially available systems, and setting changes can be performed easily without trained specialists.
ERIC Educational Resources Information Center
McDonald, Scott
2010-01-01
For decades teacher educators have used video to support developing preservice teachers, but new technologies open the possibility of a much more dynamic and real-time use for video of teaching. This article describes an initial attempt to leverage these technologies to develop a teacher learning community focused on evidence-based arguments about…
The Problem of Delayed Causation in a Video Game: Constant, Varied, and Filled Delays
ERIC Educational Resources Information Center
Young, Michael E.; Nguyen, Nam
2009-01-01
A first-person shooter video game was adapted for the study of causal decision making within dynamic environments. The video game included groups of three potential targets. Participants chose which of the three targets in each group was producing distal explosions. The actual source of the explosion effect varied in the delay between the firing…
ERIC Educational Resources Information Center
Lazarus, Jill; Roulet, Geoffrey
2013-01-01
This article discusses the integration of student-generated GeoGebra applets and Jing screencast videos to create a YouTube-like medium for sharing in mathematics. The value of combining dynamic mathematics software and screencast videos for facilitating communication and representations in a digital era is demonstrated herein. We share our…
Tian, Shu; Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei
2015-01-01
Phacoemulsification is one of the most advanced surgical treatments for cataract. However, conventional surgery involves little automation and relies heavily on the surgeon's skill. Alternatively, one envisioned scenario is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in diagnosis systems with static images, dynamic videos of the surgery introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector with randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker with Tracking-Learning-Detection to track the operation probe through the dynamic process, and an intelligent decider with discriminative learning to recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness.
Video-Seismic coupling for debris flow study at Merapi Volcano, Indonesia
NASA Astrophysics Data System (ADS)
Budi Wibowo, Sandy; Lavigne, Franck; Mourot, Philippe; Sukatja, Bambang
2016-04-01
Previous lahar disasters caused at least 44,252 deaths worldwide from 1600 to 2010, 52% of which were due to a single event in the late 20th century. The need for a better understanding of lahar flow behavior makes the general public and stakeholders much more curious than before. However, the dynamics of lahars in motion are still poorly understood because data acquisition on active flows is difficult. This research presents a debris-flow-type lahar that occurred on February 28, 2014 at Merapi volcano in Indonesia. The lahar dynamics were studied in the frame of the SEDIMER Project (Sediment-related Disasters following the 2010 centennial eruption of Merapi Volcano, Java, Indonesia), based on coupled video and seismic data analysis. We installed a seismic station at the Gendol river (1090 meters a.s.l., 4.6 km south of the summit) consisting of two geophones placed 76 meters apart parallel to the river, a high-definition camera on the edge of the river, and two rain gauges on the east and west sides of the river. The results showed that the behavior of this lahar changed continuously during the event. The lahar front moved at an average speed of 4.1 m/s at the observation site. Its maximum velocity reached 14.5 m/s with a peak discharge of 473 m3/s. The maximum depth of the flow reached 7 m. Almost 600 blocks measuring more than 1 m along their main axis were identified on the surface of the lahar during 36 minutes, an average discharge of 17 blocks per minute. Seismic frequency ranged from 10 to 150 Hz. However, there was a clear difference between upstream and downstream seismic characteristics, whose interpretation could be improved by analysis of the video recordings, especially to differentiate the debris-flow and hyperconcentrated-flow phases. The lahar video is accessible online to the broader community (https://www.youtube.com/watch?v=wlVssRoaPbw). Keywords: lahar, video, seismic signal, debris flow, hyperconcentrated flow, Merapi, Indonesia.
A hardware architecture for real-time shadow removal in high-contrast video
NASA Astrophysics Data System (ADS)
Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel
2017-09-01
Broadcasting an outdoor sports event in daytime is a challenging task due to the high contrast between shadowed and sunlit areas within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes properly, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow detail. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered with a connected-components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
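The RGB-to-YIQ step uses the standard NTSC transform; a minimal software sketch of the color conversion and a luminance/chroma shadow mask might look as follows (the thresholds are illustrative placeholders, not values from the cited architecture):

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform (rows: Y, I, Q)
RGB2YIQ = np.array([[0.299,     0.587,     0.114],
                    [0.595716, -0.274453, -0.321263],
                    [0.211456, -0.522591,  0.311135]])

def shadow_mask(rgb, y_thresh=0.35, chroma_thresh=0.25):
    """Flag candidate shadow pixels: low luminance (Y) combined with
    low chroma, so dark-but-strongly-colored regions are excluded.
    `rgb` is an (H, W, 3) float array in [0, 1]."""
    yiq = rgb @ RGB2YIQ.T
    y = yiq[..., 0]
    chroma = np.hypot(yiq[..., 1], yiq[..., 2])
    return (y < y_thresh) & (chroma < chroma_thresh)
```

A real pipeline would follow this with the connected-components filtering the abstract describes, to discard small false-positive regions before correcting illumination.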
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transport of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from a change in the user's viewing direction to the change in the displayed image is small and does not depend on the actual distance between the sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
Extraction of Blebs in Human Embryonic Stem Cell Videos.
Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao
2016-01-01
Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames; therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change in bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast, and accurate approach for bleb sequence extraction.
Techniques for animation of CFD results. [computational fluid dynamics
NASA Technical Reports Server (NTRS)
Horowitz, Jay; Hanson, Jeffery C.
1992-01-01
Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.
Meteor44 Video Meteor Photometry
NASA Technical Reports Server (NTRS)
Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.
2004-01-01
Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the 8-bit video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera- and lens-specific saturation compensation coefficients are derived from laboratory measurements of an artificial variable star. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of the meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficients using appropriate star catalogs. The images are simultaneously intensity-calibrated from the stars they contain to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long-focal-length "streak" meteor photometry, and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002, and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.
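The final step of reporting intensities in visual magnitude units rests on the standard Pogson relation; a minimal sketch follows, with a placeholder zero point (Meteor44 derives the actual photometric zero point per frame from the catalog stars in each image):

```python
import math

def instrumental_magnitude(counts, zero_point=20.0):
    """Convert a background-subtracted photon count to a magnitude via
    the Pogson relation m = zp - 2.5 * log10(counts). The zero point
    here is a placeholder, not a Meteor44 calibration value."""
    if counts <= 0:
        raise ValueError("counts must be positive")
    return zero_point - 2.5 * math.log10(counts)
```

The defining property of the scale is that a factor-of-100 flux ratio corresponds to exactly 5 magnitudes, which is why a 4-magnitude extension of dynamic range (as saturation compensation provides) is a large gain in usable flux range.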
Mode extraction on wind turbine blades via phase-based video motion estimation
NASA Astrophysics Data System (ADS)
Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu
2017-04-01
In recent years, image processing techniques have been applied more often to structural dynamics identification, characterization, and structural health monitoring. As a non-contact, full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (i.e., accelerometers, strain gauges, laser vibrometers, etc.). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained correlate well with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
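Phase-based motion estimation in this line of work builds on complex steerable pyramids, which is too involved for a short sketch; a Fourier-domain relative, phase correlation, illustrates the underlying idea that phase alone encodes displacement between two frames:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of frame b relative to
    frame a by phase correlation: normalize the cross-power spectrum so
    only phase (i.e. displacement) information remains, then locate the
    resulting correlation peak."""
    F = np.fft.fft2(a)
    G = np.fft.fft2(b)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12   # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Repeated frame-to-frame, such estimates yield a displacement time series whose spectrum exposes the natural frequencies; the steerable-pyramid method extends this to local, sub-pixel motion across the full field.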
Papenmeier, Frank; Huff, Markus
2010-02-01
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of the neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion, and expression intensity, which lie in three different sub-manifolds of one nonlinear general manifold. The proposed model, which enjoys the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, this model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model has an average accuracy of 97.77% in recognizing six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness), and 78.17% accuracy in recognizing intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to reference videos on the SSIM scale. Copyright © 2018 Elsevier Ltd. All rights reserved.
[How to produce a video to promote HIV testing in men who have sex with men?].
Menacho, Luis A; Blas, Magaly M
2015-01-01
The aim of the study was to describe the process of designing and producing a video to promote HIV testing in Peruvian men who have sex with men (MSM). The process involved the following steps: identifying theories of behavior change; identifying key messages and video features; developing a script that would captivate the target audience; working with an experienced production company; and piloting the video. A video with everyday situations of risk associated with HIV infection was the one preferred by participants. The key messages identified and the theoretical constructs chosen were used to create the video scenes. Participants identified with the main 9-minute video, which they considered clear and dynamic. It is necessary to work with the target population to design a video according to their preferences.
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method produces full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2015-02-01
The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has previously been evaluated by a few researchers using objective metrics; however, subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and of the overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs and, moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences, sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give insight into the performance of the system and the expectations of users. The results from this study show the impact of different network impairments and different video segments on users' QoE, and further analysis and study may help in optimizing system performance.
Priority-based methods for reducing the impact of packet loss on HEVC encoded video streams
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos
2013-02-01
The rapid growth in the use of video streaming over IP networks has outstripped the rate at which new network infrastructure has been deployed. These bandwidth-hungry applications now comprise a significant part of all Internet traffic and present major challenges for network service providers. The situation is more acute in mobile networks, where the available bandwidth is often limited. Work towards the standardisation of High Efficiency Video Coding (HEVC), the next-generation video coding scheme, is currently on track for completion in 2013. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC) at the same quality. However, there has been very little published research on HEVC streaming or the challenges of delivering HEVC streams in resource-constrained network environments. In this paper we consider the problem of adapting an HEVC encoded video stream to meet the bandwidth limitations of a mobile network environment. Video sequences were encoded using the Test Model under Consideration (TMuC HM6) for HEVC. Network abstraction layer (NAL) units were packetized, on a one-NAL-unit-per-RTP-packet basis, and transmitted over a realistic hybrid wired/wireless testbed configured with dynamically changing network path conditions and multiple independent network paths from the streamer to the client. Two different schemes for the prioritisation of RTP packets, based on the NAL units they contain, have been implemented and empirically compared using a range of video sequences, encoder configurations, bandwidths and network topologies. In the first prioritisation method the importance of an RTP packet was determined by the type of picture and the temporal switching point information carried in the NAL unit header.
Packets containing parameter set NAL units and video coding layer (VCL) NAL units of the instantaneous decoder refresh (IDR) and the clean random access (CRA) pictures were given the highest priority, followed by NAL units containing pictures used as reference pictures from which others can be predicted. The second method assigned a priority to each NAL unit based on the rate-distortion cost of the VCL coding units contained in the NAL unit. The sum of the rate-distortion costs of each coding unit contained in a NAL unit was used as the priority weighting. The preliminary results of extensive experiments have shown that both prioritisation schemes offered an improvement in PSNR, when comparing original and decoded received streams, over uncontrolled packet loss. The first method consistently delivered a significant average improvement of 0.97 dB over the uncontrolled scenario, while the second method provided a measurable, but less consistent, improvement across the range of testing conditions and encoder configurations.
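The second, rate-distortion-based scheme lends itself to a short sketch: one NAL unit per RTP packet, with the packet's priority taken as the sum of the RD costs of its coding units. The data structure below is invented for illustration and is not the paper's implementation.

```python
def packet_priority(cu_rd_costs):
    # Priority weighting = sum of the rate-distortion costs of the
    # coding units contained in the NAL unit (one NAL unit per packet).
    return sum(cu_rd_costs)

def drop_order(packets):
    # Under bandwidth pressure, packets with the lowest summed RD cost
    # (least distortion impact if lost) are dropped first.
    return sorted(packets, key=lambda p: packet_priority(p["cu_rd_costs"]))

packets = [
    {"nal_id": 0, "cu_rd_costs": [120.5, 98.2]},  # costly reference slice
    {"nal_id": 1, "cu_rd_costs": [10.1, 7.4]},    # cheap non-reference slice
]
assert [p["nal_id"] for p in drop_order(packets)] == [1, 0]
```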
Socio-phenomenology and conversation analysis: interpreting video lifeworld healthcare interactions.
Bickerton, Jane; Procter, Sue; Johnson, Barbara; Medina, Angel
2011-10-01
This article uses a socio-phenomenological methodology to develop knowledge and understanding of the healthcare consultation based on the concept of the lifeworld. It concentrates its attention on social action rather than strategic action and a systems approach. This article argues that patient-centred care is more effective when it is informed through a lifeworld conception of human mutual shared interaction. Videos offer an opportunity for a wide audience to experience the many kinds of conversations and dynamics that take place in consultations. Visual sociology, as used in this article, provides a method to organize emotional, knowledge and action conversations in video, as well as typical dynamic consultation situations. These interactions are experienced through the video materials themselves, unlike conversation analysis, where video materials are first transcribed and then analysed. Both approaches have the potential to support intersubjective learning, but this article argues that a video lifeworld schema is more accessible to health professionals and the general public. The typical interaction situations are constructed through the analysis of video materials of consultations in a London walk-in centre. Further studies are planned to extend and replicate the results in other healthcare services. This method of analysis focuses on the ways in which the everyday lifeworld informs face-to-face person-centred health care and supports social action as a significant factor underpinning strategic action and a systems approach to consultation practice. © 2011 Blackwell Publishing Ltd.
Video monitoring of oxygen saturation during controlled episodes of acute hypoxia.
Addison, Paul S; Foo, David M H; Jacquel, Dominique; Borg, Ulf
2016-08-01
A method for extracting video photoplethysmographic information from an RGB video stream is tested on data acquired during a porcine model of acute hypoxia. Cardiac pulsatile information was extracted from the acquired signals and processed to determine a continuously reported oxygen saturation (SvidO2). A high degree of correlation was found between the video-derived saturation and a reference pulse oximeter. The calculated mean bias and accuracy across all eight desaturation episodes were -0.03% (range: -0.21% to 0.24%) and 4.90% (range: 3.80% to 6.19%), respectively. The results support the hypothesis that oxygen saturation trending can be evaluated accurately from a video system during acute hypoxia.
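The reported bias and accuracy follow the standard pulse-oximetry statistics: mean difference and root-mean-square difference against the reference. A minimal sketch of how such figures are computed, with invented sample values rather than the study's data:

```python
import math

def mean_bias(video_spo2, ref_spo2):
    # Mean of the paired differences (video minus reference), in % SpO2
    diffs = [v - r for v, r in zip(video_spo2, ref_spo2)]
    return sum(diffs) / len(diffs)

def accuracy_rms(video_spo2, ref_spo2):
    # Root-mean-square difference, the usual "accuracy" (A_rms) statistic
    diffs = [v - r for v, r in zip(video_spo2, ref_spo2)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

video = [97.0, 92.5, 85.0, 78.0]  # invented video-derived readings
ref   = [97.2, 92.0, 86.0, 77.5]  # invented reference oximeter readings
assert abs(mean_bias(video, ref) + 0.05) < 1e-9
assert abs(accuracy_rms(video, ref) - 0.6205) < 1e-3
```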
A Taxonomy of Asynchronous Instructional Video Styles
ERIC Educational Resources Information Center
Chorianopoulos, Konstantinos
2018-01-01
Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…
Video game-based coordinative training improves ataxia in children with degenerative ataxia.
Ilg, Winfried; Schatton, Cornelia; Schicks, Julia; Giese, Martin A; Schöls, Ludger; Synofzik, Matthis
2012-11-13
Degenerative ataxias in children present a rare condition where effective treatments are lacking. Intensive coordinative training based on physiotherapeutic exercises improves degenerative ataxia in adults, but such exercises have drawbacks for children, often including a lack of motivation for high-frequency physiotherapy. Recently developed whole-body controlled video game technology might present a novel treatment strategy for highly interactive and motivational coordinative training for children with degenerative ataxias. We examined the effectiveness of an 8-week coordinative training for 10 children with progressive spinocerebellar ataxia. Training was based on 3 Microsoft Xbox Kinect video games particularly suitable for exercising whole-body coordination and dynamic balance. Training started with a laboratory-based 2-week training phase, followed by 6 weeks of training in the children's home environment. Rater-blinded assessments were performed 2 weeks before laboratory-based training, immediately prior to and after the laboratory-based training period, as well as after home training. These assessments allowed for an intraindividual control design in which performance changes with and without training were compared. Ataxia symptoms were significantly reduced (decrease in Scale for the Assessment and Rating of Ataxia score, p = 0.0078) and balance capacities improved (dynamic gait index, p = 0.04) after the intervention. Quantitative movement analysis revealed improvements in gait (lateral sway: p = 0.01; step length variability: p = 0.01) and in goal-directed leg placement (p = 0.03). Despite progressive cerebellar degeneration, children are able to improve motor performance through intensive coordinative training.
Directed training of whole-body controlled video games might present a highly motivational, cost-efficient, and home-based rehabilitation strategy to train dynamic balance and interaction with dynamic environments in a large variety of young-onset neurologic conditions. This study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.
Sun, J; Wang, T; Li, Z D; Shao, Y; Zhang, Z Y; Feng, H; Zou, D H; Chen, Y J
2017-12-01
To reconstruct a vehicle-bicycle-cyclist crash accident and analyse the injuries using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, and to provide a biomechanical basis for the forensic identification of the cause of death. The vehicle was measured by 3D laser scanning technology. Multi-rigid-body models of the cyclist, bicycle and vehicle were developed based on the measurements. The value range of the optimization variables was set. A multi-objective genetic algorithm and the nondominated sorting genetic algorithm were used to find the optimal solutions, which were compared to the surveillance video recorded around the accident scene. The laser-scanning reconstruction of the vehicle was satisfactory. In the optimal solutions found by the genetic algorithm, the dynamic behaviours of the dummy, bicycle and vehicle corresponded to those recorded by the surveillance video. The injury parameters of the dummy were consistent with the nature and position of the real injuries sustained by the cyclist in the accident. The motion status before the accident, the damage process during the crash and the mechanical analysis of the victim's injuries can be reconstructed using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, which have application value in identifying the manner of injury and analysing the cause of death in traffic accidents. Copyright © by the Editorial Department of Journal of Forensic Medicine
An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents.
Ackerman, Marc B; Brensinger, Colleen; Landis, J Richard
2004-02-01
This retrospective study was conducted to measure lip-tooth characteristics of adolescents. Pretreatment video clips of 1242 consecutive patients were screened for Class-I skeletal and dental patterns. After all inclusion criteria were applied, the final sample consisted of 50 patients (27 boys, 23 girls) with a mean age of 12.5 years. The raw digital video stream of each patient was edited to select a single image frame representing the patient saying the syllable "chee" and a second single image representing the patient's posed social smile and saved as part of a 12-frame image sequence. Each animation image was analyzed using a SmileMesh computer application to measure the smile index (the ratio of the intercommissure width divided by the interlabial gap), intercommissure width (mm), interlabial gap (mm), percent incisor below the intercommissure line, and maximum incisor exposure (mm). The data were analyzed using SAS (version 8.1). All recorded differences in linear measures had to be ≥ 2 mm. The results suggest that anterior tooth display at speech and smile should be recorded independently but evaluated as part of a dynamic range. Asking patients to say "cheese" and then smile is no longer a valid method to elicit the parameters of anterior tooth display. When planning the vertical positions of incisors during orthodontic treatment, the orthodontist should view the dynamics of anterior tooth display as a continuum delineated by the time points of rest, speech, posed social smile, and a Duchenne smile.
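Of the SmileMesh measures, the smile index is a simple ratio and can be stated directly in code. A minimal sketch; the example values are invented, not taken from the study:

```python
def smile_index(intercommissure_width_mm, interlabial_gap_mm):
    # Smile index = intercommissure width / interlabial gap
    return intercommissure_width_mm / interlabial_gap_mm

# A wide, shallow smile yields a high index; a tall, open smile a low one.
assert smile_index(60.0, 10.0) == 6.0
assert smile_index(45.0, 15.0) == 3.0
```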
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
Media and human capital development: Can video game playing make you smarter?
Suziedelyte, Agne
2015-04-01
According to the literature, video game playing can improve such cognitive skills as problem solving, abstract reasoning, and spatial logic. I test this hypothesis using The Child Development Supplement to the Panel Study of Income Dynamics. The endogeneity of video game playing is addressed by using panel data methods and controlling for an extensive list of child and family characteristics. To address the measurement error in video game playing, I instrument children's weekday time use with their weekend time use. After taking into account the endogeneity and measurement error, video game playing is found to positively affect children's problem solving ability. The effect of video game playing on problem solving ability is comparable to the effect of educational activities.
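With a single instrument (weekend time use) for a single mismeasured regressor (weekday time use), the instrumental-variable estimator reduces to the Wald form beta = cov(z, y) / cov(z, x). A toy sketch with invented numbers, not the PSID data:

```python
def cov(a, b):
    # Population covariance of two equal-length samples
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_estimate(y, x, z):
    # Single-instrument IV (Wald) estimator: beta = cov(z, y) / cov(z, x).
    # z = weekend game time (instrument), x = weekday game time
    # (endogenous, mismeasured), y = problem-solving score.
    return cov(z, y) / cov(z, x)

weekend = [0.0, 1.0, 2.0, 3.0]    # instrument (hours, invented)
weekday = [0.5, 1.5, 2.5, 3.5]    # endogenous regressor (invented)
score   = [10.0, 12.0, 14.0, 16.0]  # outcome (invented)
assert abs(iv_estimate(score, weekday, weekend) - 2.0) < 1e-9
```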
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).
Teaching physics with Angry Birds: exploring the kinematics and dynamics of the game
NASA Astrophysics Data System (ADS)
Rodrigues, M.; Simeão Carvalho, P.
2013-07-01
In this paper, we present classroom strategies for teaching kinematics at middle and high school levels, using Rovio’s famous game Angry Birds and the video analyser software Tracker. We show how to take advantage of this entertaining video game, by recording appropriate motions of birds that students can explore by manipulating data, characterizing the red bird’s motion and fitting results to physical models. A dynamic approach is also addressed to link gravitational force to projectile trajectories.
Automated video-based assessment of surgical skills for training and evaluation in medical schools.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan
2016-09-01
Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainees in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All these approaches, however, are still time-consuming and subject to human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities. We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis. We successfully classified surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
Wang, Regina W. Y.; Chang, Yu-Ching; Chuang, Shang-Wen
2016-01-01
Neuromarketing has become popular and received a lot of attention. The quality of video commercials and the product information they convey to consumers is a hotly debated topic among advertising agencies and product advertisers. This study explored the impact of advertising narrative and the frequency of branding product exposures on the preference for the commercial and the branding product. We performed electroencephalography (EEG) experiments on 30 subjects while they watched video commercials. The behavioral data indicated that commercials with a structured narrative and containing multiple exposures of the branding products had a positive impact on the preference for the commercial and the branding product. The EEG spectral dynamics showed that the narratives of video commercials resulted in higher theta power of the left frontal, bilateral occipital region, and higher gamma power of the limbic system. The narratives also induced significant cognitive integration-related beta and gamma power of the bilateral temporal regions and the parietal region. It is worth noting that the video commercials with a single exposure of the branding products would be indicators of attention. These new findings suggest that the presence of a narrative structure in video commercials has a critical impact on the preference for branding products. PMID:27819348
Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis.
Derpanis, Konstantinos G; Sizintsev, Mikhail; Cannons, Kevin J; Wildes, Richard P
2013-03-01
This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging datasets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard datasets.
Bandwidth auction for SVC streaming in dynamic multi-overlay
NASA Astrophysics Data System (ADS)
Xiong, Yanting; Zou, Junni; Xiong, Hongkai
2010-07-01
In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to dynamic join of competing peers and video layers.
Optimizing Educational Video through Comparative Trials in Clinical Environments
ERIC Educational Resources Information Center
Aronson, Ian David; Plass, Jan L.; Bania, Theodore C.
2012-01-01
Although video is increasingly used in public health education, studies generally do not implement randomized trials of multiple video segments in clinical environments. Therefore, the specific configurations of educational videos that will have the greatest impact on outcome measures ranging from increased knowledge of important public health…
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
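The abstract does not spell out the ranging equation; the stereometric analysis it refers to is conventionally the triangulation relation Z = f·B/d for focal length f, camera baseline B, and horizontal disparity d. A minimal sketch with made-up camera parameters:

```python
def range_from_disparity(focal_length_px, baseline_m, disparity_px):
    # Standard stereometric triangulation: range Z = f * B / d.
    # f in pixels, baseline in metres, disparity in pixels -> Z in metres.
    return focal_length_px * baseline_m / disparity_px

# Example: f = 800 px, 0.12 m baseline, 16 px horizontal disparity between
# the processed left/right laser-spot images.
assert range_from_disparity(800.0, 0.12, 16.0) == 6.0
assert range_from_disparity(800.0, 0.12, 32.0) == 3.0  # larger disparity, closer target
```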
New feature of the neutron color image intensifier
NASA Astrophysics Data System (ADS)
Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke
2009-06-01
We developed prototype neutron color image intensifiers with high-sensitivity, wide-dynamic-range and long-life characteristics. In the prototype intensifier (Gd-Type 1), a terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned, highly sensitive color CCD camera, higher sensitivity and a wider dynamic range could be simultaneously attained than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm^2/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.
Posterior Cricoarytenoid Muscle Dynamics in Canines and Humans
Chhetri, Dinesh K.; Neubauer, Juergen; Sofer, Elazar
2015-01-01
Objective The posterior cricoarytenoid (PCA) muscle is the sole abductor of the glottis and serves important functions during respiration, phonation, cough, and sniff. The present study examines vocal fold abduction dynamics during PCA muscle activation. Study Design Basic science study using an in vivo canine model and human subjects. Methods In four canines and five healthy humans vocal fold abduction time was measured using high speed video recording. In the canines, PCA muscle activation was achieved using graded stimulation of the PCA nerve branch. The human subjects performed coughing and sniffing tasks. High speed video and audio signals were concurrently recorded. Results In the canines the vocal fold moved posteriorly, laterally, and superiorly during abduction. Average time to reach 10%, 50% and 90% abduction was 23, 50, and 100 ms with low stimulation, 24, 58, and 129 ms with medium stimulation, and 21, 49, and 117 ms with high level stimulation. In the humans, 100% abduction times for coughing and sniffing tasks were 79 and 193 ms, respectively. Conclusion The PCA abduction times in canines are within the range in humans. The results also further support the notion that PCA muscles are fully active during cough. Level of Evidence N/A (Animal studies and basic research) PMID:24781959
Spatial constraints of stereopsis in video displays
NASA Technical Reports Server (NTRS)
Schor, Clifton
1989-01-01
Recent development in video technology, such as the liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereodepth in video displays.
Impact of emotionality on memory and meta-memory in schizophrenia using video sequences.
Peters, Maarten J V; Hauschildt, Marit; Moritz, Steffen; Jelinek, Lena
2013-03-01
A vast amount of memory and meta-memory research in schizophrenia shows that these patients perform worse on memory accuracy and hold false information with stronger conviction compared to healthy controls. So far, studies investigating these effects have mainly used traditional static stimulus material such as word lists or pictures. The question remains whether these memory and meta-memory effects are also present in (1) more near-life dynamic situations (i.e., using standardized videos) and (2) whether emotionality has an influence on memory and meta-memory deficits (i.e., response confidence) in schizophrenia compared to healthy controls. Twenty-seven schizophrenia patients and 24 healthy controls were administered a newly developed emotional video paradigm with five videos differing in emotionality (positive, two negative, neutral, and delusion-related). After each video, a recognition task required participants to make old-new discriminations along with confidence ratings, investigating memory accuracy and meta-memory deficits in more dynamic settings. For all but the positively valenced video, patients recognized fewer correct items compared to healthy controls, and did not differ with regard to the number of false memories for related items. In line with prior findings, schizophrenia patients showed more high-confident responses for misses and false memories for related items but displayed underconfidence for hits when compared to healthy controls, independent of emotionality. Limitations include the limited sample size and control group, the combined valence and arousal indicator for emotionality, and the general psychopathology indicator. Emotionality differentially moderated memory accuracy and biases in schizophrenia patients compared to controls. Moreover, the meta-memory deficits identified in static paradigms also manifest in more dynamic, near-life settings and seem to be independent of emotionality. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fortune, Emma; Lugade, Vipul; Morrow, Melissa; Kaufman, Kenton
2014-01-01
A subject-specific step counting method with a high accuracy level at all walking speeds is needed to assess the functional level of impaired patients. The study aim was to validate step counts and cadence calculations from acceleration data by comparison to video data during dynamic activity. Custom-built activity monitors, each containing one tri-axial accelerometer, were placed on the ankles, thigh, and waist of 11 healthy adults. ICC values were greater than 0.98 for video inter-rater reliability of all step counts. The activity monitoring system (AMS) algorithm demonstrated a median (interquartile range; IQR) agreement of 92% (8%) with visual observations during walking/jogging trials at gait velocities ranging from 0.1 m/s to 4.8 m/s, while FitBits (ankle and waist), and a Nike Fuelband (wrist) demonstrated agreements of 92% (36%), 93% (22%), and 33% (35%), respectively. The algorithm results demonstrated high median (IQR) step detection sensitivity (95% (2%)), positive predictive value (PPV) (99% (1%)), and agreement (97% (3%)) during a laboratory-based simulated free-living protocol. The algorithm also showed high median (IQR) sensitivity, PPV, and agreement identifying walking steps (91% (5%), 98% (4%), and 96% (5%)), jogging steps (97% (6%), 100% (1%), and 95% (6%)), and less than 3% mean error in cadence calculations. PMID:24656871
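The validation statistics quoted above are the usual detection metrics. A minimal sketch of how they relate to true/false detections and to the video-observed count; the numbers below are illustrative, not the study's raw counts:

```python
def sensitivity(true_pos, false_neg):
    # Fraction of video-observed steps the algorithm detected
    return true_pos / (true_pos + false_neg)

def positive_predictive_value(true_pos, false_pos):
    # Fraction of algorithm-detected steps that were real steps
    return true_pos / (true_pos + false_pos)

def percent_agreement(algo_steps, video_steps):
    # Agreement between total algorithm and video step counts
    return 100.0 * (1.0 - abs(algo_steps - video_steps) / video_steps)

assert sensitivity(95, 5) == 0.95
assert positive_predictive_value(99, 1) == 0.99
assert abs(percent_agreement(97, 100) - 97.0) < 1e-9
```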
Advanced Video Data-Acquisition System For Flight Research
NASA Technical Reports Server (NTRS)
Miller, Geoffrey; Richwine, David M.; Hass, Neal E.
1996-01-01
Advanced video data-acquisition system (AVDAS) developed to satisfy variety of requirements for in-flight video documentation. Requirements range from providing images for visualization of airflows around fighter airplanes at high angles of attack to obtaining safety-of-flight documentation. F/A-18 AVDAS takes advantage of very capable systems like NITE Hawk forward-looking infrared (FLIR) pod and recent video developments like miniature charge-coupled-device (CCD) color video cameras and other flight-qualified video hardware.
Stochastic dynamics and the predictability of big hits in online videos.
Miotto, José M; Kantz, Holger; Altmann, Eduardo G
2017-03-01
The competition for the attention of users is a central element of the Internet. Crucial issues are the origin and predictability of big hits, the few items that capture a big portion of the total attention. We address these issues analyzing 10^{6} time series of videos' views from YouTube. We find that the average gain of views is linearly proportional to the number of views a video already has, in agreement with usual rich-get-richer mechanisms and Gibrat's law, but this fails to explain the prevalence of big hits. The reason is that the fluctuations around the average views are themselves heavy tailed. Based on these empirical observations, we propose a stochastic differential equation with Lévy noise as a model of the dynamics of videos. We show how this model is substantially better in estimating the probability of an ordinary item becoming a big hit, which is considerably underestimated in the traditional proportional-growth models.
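The proportional-growth argument can be sketched numerically. The simulation below contrasts Gaussian (Gibrat-style) multiplicative fluctuations with heavy-tailed ones; the Pareto-distributed shocks are an illustrative stand-in for the paper's Lévy noise, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_views(n_videos=20000, n_steps=100, mu=0.02, sigma=0.1, alpha=1.5, levy=True):
    """Proportional-growth ('rich-get-richer') dynamics for video views.

    Each step the view count changes by a multiplicative fluctuation. With
    levy=True the fluctuations are heavy-tailed (Pareto tail index alpha,
    an illustrative stand-in for Levy noise); otherwise they are Gaussian,
    as in classical Gibrat-law models.
    """
    x = np.ones(n_videos)
    for _ in range(n_steps):
        if levy:
            # heavy-tailed shocks: Pareto-distributed magnitude with random sign
            shocks = sigma * rng.pareto(alpha, n_videos) * rng.choice([-1, 1], n_videos)
        else:
            shocks = sigma * rng.standard_normal(n_videos)
        x *= np.maximum(1.0 + mu + shocks, 0.01)  # floor keeps view counts positive
    return x

heavy = simulate_views(levy=True)
gauss = simulate_views(levy=False)
# "Big hits": the extreme tail relative to the typical video is far fatter
# under heavy-tailed noise than under the Gaussian proportional-growth model.
print(np.quantile(heavy, 0.999) / np.median(heavy))
print(np.quantile(gauss, 0.999) / np.median(gauss))
```

The second printed ratio stays modest (lognormal tail), while the first is orders of magnitude larger, mirroring the paper's point that proportional growth alone underestimates the probability of big hits.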
ERIC Educational Resources Information Center
Kellems, Ryan O.; Edwards, Sean
2016-01-01
Practitioners are constantly searching for evidence-based practices that are effective in teaching academic skills to students with learning disabilities (LD). Video modeling (VM) and video prompting have become popular instructional interventions for many students across a wide range of different disability classifications, including those with…
Siervo, Mario; Cameron, Hannah; Wells, Jonathan C K; Lara, Jose
2014-12-01
Video-game playing is associated with an increased obesity risk. The association of video-game playing with body composition, physical activity and eating behaviour was investigated. A total of 45 young males (age range 18-27 years, BMI range 18.5-35.1 kg/m²) were recruited. Measurements of body composition and blood pressure were performed. The EPIC-FFQ questionnaire was used to assess dietary intake. A questionnaire battery was administered to assess physical activity, eating behaviour, sleep quality and frequency of video-game playing (hours/week). Subjects were categorised into frequent (>7 h/week) and non-frequent (≤7 h/week) players. Frequent video-game players had greater waist circumference and fat mass. Video-game playing was significantly associated with high added sugar and low fibre consumption. A higher level of dietary restraint was observed in non-frequent video-game users. These preliminary results identify frequent video-game playing as an important lifestyle behaviour which may have important implications for understanding obesity risk in young male adults.
A shower look-up table to trace the dynamics of meteoroid streams and their sources
NASA Astrophysics Data System (ADS)
Jenniskens, Petrus
2018-04-01
Meteor showers are caused by meteoroid streams from comets (and some primitive asteroids). They trace the comet population and its dynamical evolution, warn of dangerous long-period comets that can pass close to Earth's orbit, outline volumes of space with a higher satellite impact probability, and define how meteoroids evolve in the interplanetary medium. Ongoing meteoroid orbit surveys have mapped these showers in recent years, but the surveys are now running up against an increasingly complicated scene. The IAU Working List of Meteor Showers has reached 956 entries to be investigated (as of March 1, 2018). The picture is further complicated by the discovery that radar-detected streams are often different from, or differently distributed than, video-detected streams. Complicating matters even more, some meteor showers are active over many months, during which their radiant position gradually changes, which makes the use of mean orbits as a proxy for a meteoroid stream's identity meaningless. The dispersion of the stream in space and time is important to that identity and contains much information about its origin and dynamical evolution. To make sense of the meteor shower zoo, a Shower Look-Up Table was created that captures this dispersion. The Shower Look-Up Table has enabled the automated identification of showers in the ongoing CAMS video-based meteoroid orbit survey, results of which are presented now online in near-real time at http://cams.seti.org/FDL/. Visualization tools have been built that depict the streams in a planetarium setting. Examples will be presented that sample the range of meteoroid streams that this look-up table describes. Possibilities for further dynamical studies will be discussed.
The Effect of Normalization in Violence Video Classification Performance
NASA Astrophysics Data System (ADS)
Ali, Ashikin; Senan, Norhalina
2017-08-01
Basically, data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many problems, especially in video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore helps improve the robustness of classification performance. Normalization scales all numeric variables into a certain range so that they are more meaningful to the subsequent phases of the data mining process. This paper examines the effect of two normalization techniques, Min-max normalization and Z-score, on the classification rate in violence video classification using a multi-layer perceptron (MLP) classifier. With Min-max normalization to the range [0,1] the accuracy is almost 98%, with Min-max normalization to the range [-1,1] the accuracy is 59%, and with Z-score the accuracy is 50%.
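The two normalization techniques compared above are standard and easy to state in code; a minimal NumPy sketch (column-wise, as typically applied to feature matrices):

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Linearly rescale each feature column to the interval [lo, hi]."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return lo + (x - xmin) * (hi - lo) / (xmax - xmin)

def z_score(x):
    """Center each feature column to zero mean and unit standard deviation."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Toy feature matrix (rows = samples, columns = features)
X = np.array([[10.0, 200.0],
              [20.0, 400.0],
              [40.0, 1000.0]])
print(min_max(X))             # each column spans [0, 1]
print(min_max(X, -1.0, 1.0))  # each column spans [-1, 1]
print(z_score(X))             # each column has mean 0, std 1
```

Min-max preserves the shape of the original distribution within a fixed range, while Z-score standardization is unbounded but robust to differing feature scales; as the paper's results show, the choice can substantially affect classifier accuracy.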
Baldwin, David M
2013-01-01
The objective of this single-arm interventional pilot study was to determine whether viewing an educational video about phosphorus and phosphorus control by patients on hemodialysis was associated with improved phosphorus values and improvement in knowledge and attitudes about the topics presented. An educational video was shown to 150 patients at 16 dialysis centers. The change in serum phosphate levels before and after the viewing of an educational video was evaluated. Mean phosphorus levels for patients were lower in the month after viewing the educational video compared to their values over the three months before the video was shown (6.35 versus 6.82 mg/dL). This difference was statistically significant on a per patient basis (-0.47 mg/dL, p = 0.0006). Of these patients, all with phosphorus levels outside of the normal range (3.5 to 5.5 mg/dL) before viewing the video, 28.4% had phosphorus levels within the normal range within a month after viewing the video. Patients on hemodialysis who watched an educational video had improved phosphorus levels in the month after viewing the video when compared to phosphorus levels over the three months before the video was shown. The video intervention has the advantages of being simple, low-cost, and easy to implement, and is associated with improved phosphorus levels in patients undergoing hemodialysis. The video increased patient compliance with recommended self-care regimens.
Determination of the static friction coefficient from circular motion
NASA Astrophysics Data System (ADS)
Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.
2014-07-01
This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames per second, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
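The comparison of centripetal and maximum static friction forces reduces to one relation: the coin stays put while m ω² r ≤ μ_s m g, so at the critical angular velocity where it slips, μ_s = ω² r / g. A small sketch (the turntable speed and coin radius below are illustrative numbers, not measurements from the paper):

```python
import math

def static_friction_coefficient(omega_slip, r, g=9.81):
    """mu_s from the slip condition of a coin on a rotating turntable.

    Static friction supplies the centripetal force, m*omega^2*r <= mu_s*m*g.
    At the critical angular velocity omega_slip (rad/s), found frame-by-frame
    from the video, the bound is tight: mu_s = omega_slip^2 * r / g.
    """
    return omega_slip ** 2 * r / g

# Illustrative example: a coin 8 cm from the axis slips at 45 rpm
omega = 45 * 2 * math.pi / 60  # convert rpm to rad/s
mu_s = static_friction_coefficient(omega, 0.08)
print(round(mu_s, 3))  # 0.181
```

Mass cancels out of the slip condition, which is part of what makes this a clean student exercise.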
Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.
An automated approach for tone mapping operator parameter adjustment in security applications
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Le Callet, Patrick
2014-05-01
High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Different from the traditional low dynamic range (LDR), HDR content tends to be visually more appealing and realistic, as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and visual quality tends to improve. HDR can also be directly exploited in new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying/tracking visual information which might otherwise be difficult with typical LDR content due to factors such as lack/excess of illumination, extreme contrast in the scene, etc. On the other hand, with HDR there might be issues related to increased privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMOs) are used. In this paper, we present a universal method for TMO parameter tuning that maintains as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes from tone-mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could be an advantage for security surveillance but, on the other hand, raises the possibility of increased privacy intrusion.
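The abstract only summarizes the tuning method, so the sketch below illustrates the general idea (choose TMO parameters that retain detail) rather than the authors' algorithm: it grid-searches the key parameter of a simple global Reinhard-style operator to maximize the entropy of the tone-mapped histogram. The operator, the entropy proxy for "detail", and the parameter grid are all assumptions for illustration.

```python
import numpy as np

def reinhard_tmo(lum, a=0.18):
    """Simple global Reinhard-style operator mapping luminance to [0, 1)."""
    key = np.exp(np.mean(np.log(lum + 1e-6)))  # log-average luminance
    scaled = a * lum / key
    return scaled / (1.0 + scaled)

def entropy(img, bins=64):
    """Shannon entropy of the tone-mapped histogram: a crude proxy for retained detail."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def tune_parameter(lum, candidates):
    """Grid-search the TMO parameter that maximizes output entropy."""
    return max(candidates, key=lambda a: entropy(reinhard_tmo(lum, a)))

# Synthetic HDR luminance spanning ~5 orders of magnitude
rng = np.random.default_rng(1)
lum = np.exp(rng.uniform(np.log(1e-2), np.log(1e3), size=50000))
best_a = tune_parameter(lum, [0.05, 0.09, 0.18, 0.36, 0.72])
print(best_a)
```

A real TMO tuner would use a perceptual detail metric rather than raw histogram entropy, but the outer optimization loop has the same shape.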
Fargier, Raphaël; Paulignan, Yves; Boulenger, Véronique; Monaghan, Padraic; Reboul, Anne; Nazir, Tatjana A
2012-07-01
Action words referring to face, arm or leg actions activate areas along the motor strip that also control the planning and execution of the actions specified by the words. This electroencephalogram (EEG) study aimed to test the learning profile of this language-induced motor activity. Participants were trained to associate novel verbal stimuli to videos of object-oriented hand and arm movements or animated visual images on two consecutive days. Each training session was preceded and followed by a test-session with isolated videos and verbal stimuli. We measured motor-related brain activity (reflected by a desynchronization in the μ frequency bands; 8-12 Hz range) localized at centro-parietal and fronto-central electrodes. We compared activity from viewing the videos to activity resulting from processing the language stimuli only. At centro-parietal electrodes, stable action-related μ suppression was observed during viewing of videos in each test-session of the two days. For processing of verbal stimuli associated with motor actions, a similar pattern of activity was evident only in the second test-session of Day 1. Over the fronto-central regions, μ suppression was observed in the second test-session of Day 2 for the videos and in the second test-session of Day 1 for the verbal stimuli. Whereas the centro-parietal μ suppression can be attributed to motor events actually experienced during training, the fronto-central μ suppression seems to serve as a convergence zone that mediates underspecified motor information. Consequently, sensory-motor reactivations through which concepts are comprehended seem to differ in neural dynamics from those implicated in their acquisition. Copyright © 2011 Elsevier Srl. All rights reserved.
Using PDV to Understand Damage in Rocket Motor Propellants
NASA Astrophysics Data System (ADS)
Tear, Gareth; Chapman, David; Ottley, Phillip; Proud, William; Gould, Peter; Cullis, Ian
2017-06-01
There is a continuing requirement to design and manufacture insensitive munition (IM) rocket motors for in-service use under a wide range of conditions, particularly due to shock initiation and detonation of damaged propellant spalled across the central bore of the rocket motor (XDT). High speed photography has been crucial in determining this behaviour, however attempts to model the dynamic behaviour are limited by the lack of precision particle and wave velocity data with which to validate against. In this work Photonic Doppler Velocimetry (PDV) has been combined with high speed video to give accurate point velocity and timing measurements of the rear surface of a propellant block impacted by a fragment travelling up to 1.4 km/s. By combining traditional high speed video with PDV through a dichroic mirror, the point of velocity measurement within the debris cloud has been determined. This demonstrates a new capability to characterise the damage behaviour of a double base rocket motor propellant and hence validate the damage and fragmentation algorithms used in the numerical simulations.
Experimental investigation of the combustion products in an aluminised solid propellant
NASA Astrophysics Data System (ADS)
Liu, Zhu; Li, Shipeng; Liu, Mengying; Guan, Dian; Sui, Xin; Wang, Ningfei
2017-04-01
Aluminium is widely used as an additive to improve ballistic and energy performance in solid propellants, but unburned aluminium does not contribute to the specific impulse and incurs both thermal and momentum two-phase flow losses. An understanding of aluminium combustion behaviour during solid-propellant burning is therefore important for improving internal ballistic performance. Recent developments and experimental results on such combustion behaviour are presented in this paper. A variety of experimental techniques, ranging from quenching and dynamic measurement to high-speed CCD video recording, were used to study aluminium combustion behaviour and the size distribution of the initial agglomerates. This experimental investigation also provides the size distribution of the condensed-phase products. Results suggest that the addition of an organic fluoride compound to a solid propellant will generate smaller-diameter condensed-phase products due to sublimation of AlF3. Lastly, a physico-chemical picture of the agglomeration process was developed based on the results of high-speed CCD video analysis.
Video Game Learning Dynamics: Actionable Measures of Multidimensional Learning Trajectories
ERIC Educational Resources Information Center
Reese, Debbie Denise; Tabachnick, Barbara G.; Kosko, Robert E.
2015-01-01
Valid, accessible, reusable methods for instructional video game design and embedded assessment can provide actionable information enhancing individual and collective achievement. Cyberlearning through game-based, metaphor-enhanced learning objects (CyGaMEs) design and embedded assessment quantify player behavior to study knowledge discovery and…
Actin Filaments and Myosin I Alpha Cooperate with Microtubules for the Movement of Lysosomes
Cordonnier, Marie-Neige; Dauzonne, Daniel; Louvard, Daniel; Coudrier, Evelyne
2001-01-01
An earlier report suggested that actin and myosin I alpha (MMIα), a myosin associated with endosomes and lysosomes, were involved in the delivery of internalized molecules to lysosomes. To determine whether actin and MMIα were involved in the movement of lysosomes, we analyzed by time-lapse video microscopy the dynamics of lysosomes in living mouse hepatoma cells (BWTG3 cells) producing green fluorescent protein (GFP)-actin or a nonfunctional domain of MMIα. In GFP-actin cells, lysosomes displayed a combination of rapid long-range directional movements dependent on microtubules, short random movements, and pauses, sometimes on actin filaments. We showed that the inhibition of the dynamics of actin filaments by cytochalasin D increased pauses of lysosomes on actin structures, while depolymerization of actin filaments using latrunculin A increased the mobility of lysosomes but impaired the directionality of their long-range movements. The production of a nonfunctional domain of MMIα impaired the intracellular distribution of lysosomes and the directionality of their long-range movements. Altogether, our observations indicate for the first time that both actin filaments and MMIα contribute to the movement of lysosomes in cooperation with microtubules and their associated molecular motors. PMID:11739797
Application of M-JPEG compression hardware to dynamic stimulus production.
Mulligan, J B
1997-01-01
Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
Perception of synchronization errors in haptic and visual communications
NASA Astrophysics Data System (ADS)
Kameyama, Seiji; Ishibashi, Yutaka
2006-10-01
This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to another user. By subjective assessment, we investigate the allowable and imperceptible ranges of synchronization error between haptic media and video. We employ four real objects and ask each subject whether the synchronization error is perceived or not for each object in the assessment. Assessment results show that the synchronization error is more easily perceived when haptic media are ahead of video than when haptic media are behind video.
Serious Games: Video Games for Good?
ERIC Educational Resources Information Center
Sanford, Kathy; Starr, Lisa J.; Merkel, Liz; Bonsor Kurki, Sarah
2015-01-01
As video games become a ubiquitous part of today's culture internationally, as educators and parents we need to turn our attention to how video games are being understood and used in informal and formal settings. Serious games have developed as a genre of video games marketed for educating youth about a range of world issues. At face value this…
What Leadership Looks Like: Videos Help Aspiring Leaders Get the Picture
ERIC Educational Resources Information Center
Clark, Lynn V.
2012-01-01
Finding out what instructional leadership looks like is at the center of a new trend in leadership development: videos of practice. These range from minimally edited videos of a leader's own practice to highly edited clips that focus on successful leadership actions in authentic school settings. While videos of practice are widely used in teacher…
Dynamic Function Allocation in Fighter Cockpits.
1987-06-30
their ability to play the video game simulation used in this study. This was done in an attempt to conceptually match the subject's skills to those of...highly trained Air Force pilots. Apparatus: Simulation. A single-seat fighter cockpit environment was simulated using the F-15 Strike Eagle video game developed...simulator containing three color CRTs. The video game was presented on the CRT located in the HUD position. The subjects controlled the game through a
The Effects of Reviews in Video Tutorials
ERIC Educational Resources Information Center
van der Meij, H.; van der Meij, J.
2016-01-01
This study investigates how well a video tutorial for software training that is based on Demonstration-Based Teaching supports user motivation and performance. In addition, it is studied whether reviews significantly contribute to these measures. The Control condition employs a tutorial with instructional features added to a dynamic task…
Here's Another Nice Mess: Using Video in Reflective Dialogue Research Method
ERIC Educational Resources Information Center
Hepplewhite, K.
2014-01-01
This account discusses "reflective dialogues", a process utilising video to re-examine in-action decision-making with theatre practitioners who operate in community contexts. The reflexive discussions combine with observation, text and digital documentation to offer a sometimes "messy" (from Schön 1987) dynamic to the research…
Choosing Among Causal Agents in a Dynamic Environment
2009-07-30
Participants in a video game environment were required to make a series of decisions in which they must identify which of three targets was causing a...was higher but not when prior video game experience was controlled for. In contrast, women observed their targets for much longer before making a
How Physics is Used in Video Games
ERIC Educational Resources Information Center
Bourg, David M.
2004-01-01
Modern video games use physics to achieve realistic behaviour and special effects. Everything from billiard balls, to flying debris, to tactical fighter jets is simulated in games using fundamental principles of dynamics. This article explores several examples of how physics is used in games. Further, this article describes some of the more…
Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2014-05-01
Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality when compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, which is a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies in DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system by implementing a real-world test bed, which consists of an Apache HTTP Server with GPAC, an MP4Client (GPAC) with an openHEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
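The client-driven quality switching that DASH standardizes reduces to a rate-selection rule over the advertised representations. A minimal hedged sketch (the safety margin and bitrate ladder below are illustrative values, not parameters of the paper's test bed):

```python
def select_representation(throughput_kbps, representations, safety=0.8):
    """Pick the highest bitrate the measured throughput can sustain.

    representations: available bitrates in kbps, as a DASH MPD would advertise;
    safety < 1 leaves headroom against bandwidth fluctuation. Throughput below
    every representation falls back to the lowest one.
    """
    affordable = [r for r in sorted(representations) if r <= safety * throughput_kbps]
    return affordable[-1] if affordable else min(representations)

ladder = [350, 700, 1500, 3000, 6000]       # example bitrate ladder, kbps
print(select_representation(4000, ladder))  # 3000: the largest rung under 0.8 * 4000
print(select_representation(300, ladder))   # 350: below all rungs, fall back to lowest
```

Real players add segment-buffer hysteresis on top of such a rule to avoid oscillating between rungs; the paper's NETEM-emulated packet loss, bandwidth and delay conditions are exactly what stress this selection loop.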
An introduction to electronic learning and its use to address challenges in surgical training.
Baran, Szczepan W; Johnson, Elizabeth J; Kehler, James
2009-06-01
The animal research community faces a shortage of surgical training opportunities along with an increasing demand for expertise in surgical techniques. One possible means of overcoming this challenge is the use of computer-based or electronic learning (e-learning) to disseminate material to a broad range of animal users. E-learning platforms can take many different forms, ranging from simple text documents that are posted online to complex virtual courses that incorporate dynamic video or audio content and in which students and instructors can interact in real time. The authors present an overview of e-learning and discuss its potential benefits as a supplement to hands-on rodent surgical training. They also discuss a few basic considerations in developing and implementing electronic courses.
A real-time optical tracking and measurement processing system for flying targets.
Guo, Pengyu; Ding, Shaowen; Zhang, Hongliang; Zhang, Xiaohu
2014-01-01
Optical tracking and measurement for flying targets is unlike close range photography under a controllable observation environment: it brings extreme conditions such as diverse target changes resulting from high maneuverability and long cruising range. This paper first designs and realizes a distributed image interpretation and measurement processing system to achieve centralized resource management, multisite simultaneous interpretation, and adaptive estimation algorithm selection; it then proposes a real-time interpretation method comprising automatic foreground detection, online target tracking, multiple-feature location, and human guidance. An experiment evaluating the performance and efficiency of the method was carried out using semisynthetic video. The system can be used in the field of aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control.
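The automatic foreground detection stage can be illustrated with a running-average background model and frame differencing; this is a generic sketch of the technique class, not the paper's algorithm, and the scene sizes, learning rate, and threshold are illustrative.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels differing from the background by more than thresh are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

# Static 64x64 scene; a bright 8x8 "target" enters after the model is learned
rng = np.random.default_rng(2)
scene = rng.uniform(90, 110, size=(64, 64))
bg = scene.copy()
for _ in range(20):  # learn the empty scene under sensor noise
    frame = scene + rng.normal(0, 2, scene.shape)
    bg = update_background(bg, frame)

frame = scene + rng.normal(0, 2, scene.shape)
frame[10:18, 30:38] += 80.0  # target appears
mask = foreground_mask(bg, frame)
print(mask.sum())  # the 64 target pixels
```

The detected blob would then seed an online tracker; real systems also handle camera motion (as the mosaicking and homography approaches elsewhere in this collection do), which a fixed background model cannot.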
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott G. Bauer; Matthew O. Anderson; James R. Hanneman
2005-10-01
The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public service first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. 'Packable' or 'portable' small class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it transmits to the users. Therefore, 'packable' and 'portable' UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic ground based real-time tracking high gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV deployed wireless assets.
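The range extension from a high-gain ground antenna follows directly from the Friis free-space model: path loss grows 20 dB per decade of distance, so every 6 dB of extra antenna gain roughly doubles the usable free-space range. The 24 dBi tracking-dish figure below is a hypothetical example, not a parameter from the paper.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB, from the Friis equation."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def range_gain_factor(extra_gain_db):
    """Free-space range multiplier bought by extra link gain (6 dB doubles range)."""
    return 10 ** (extra_gain_db / 20)

print(round(fspl_db(1000, 2.4e9), 1))        # ~100 dB at 1 km, 2.4 GHz
# Replacing a 0 dBi omni with a hypothetical 24 dBi tracking dish:
print(round(range_gain_factor(24.0), 1))     # ~15.8x the usable range
```

This is why a ground-based directional antenna can extend stand-off range without adding transmit power or payload weight to the UAV itself; real links also budget for margin, multipath, and antenna pointing error.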
Evaluating the Accuracy and Quality of the Information in Kyphosis Videos Shared on YouTube.
Erdem, Mehmet Nuri; Karaca, Sinan
2018-04-16
A quality-control YouTube-based study using recognized quality scoring systems. In this study, our aim was to assess the accuracy and quality of the information in kyphosis videos shared on YouTube. The Internet is a widely and increasingly used source for obtaining medical information by both patients and clinicians. YouTube, in particular, stands out as a leading source with its ease of access to information and its visual advantage for Internet users. The first 50 videos returned by the YouTube search engine in response to a 'kyphosis' keyword query were included in the study and categorized into seven groups by source and six groups by content. The popularity of the videos was evaluated with a new index called the video power index (VPI). The quality, educational quality and accuracy of the source of information were measured using the JAMA score, Global Quality Score (GQS) and Kyphosis Specific Score (KSS). Videos had a mean duration of 397 seconds and a mean number of views of 131,644, with a total viewing number of 6,582,221. The source (uploader) in 36% of the videos was a trainer and the content in 46% of the videos was exercise training; 72% of the videos were about postural kyphosis. Videos had a mean JAMA score of 1.36 (range: 1 to 4), GQS of 1.68 (range: 1 to 5) and KSS of 3.02 (range: 0 to 32). The academic group had the highest scores and the lowest VPIs. Online information on kyphosis is of low quality and its contents are of unknown source and accuracy. In order to keep the balance in sharing the right information with the patient, clinicians should possess knowledge about the online information related to their field, and should contribute to the development of optimal medical videos. Level of Evidence: 3.
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; White, John S.
1986-11-01
Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.
Learned saliency transformations for gaze guidance
NASA Astrophysics Data System (ADS)
Vig, Eleonora; Dorr, Michael; Barth, Erhardt
2011-03-01
The saliency of an image or video region indicates how likely it is that the viewer of the image or video fixates that region due to its conspicuity. An intriguing question is how we can change a video region to make it more or less salient. Here, we address this problem by using a machine learning framework to learn, from a large set of eye movements collected on real-world dynamic scenes, how to alter the saliency level of the video locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
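The core operation described here, raising or lowering a region's conspicuity by rescaling its band-pass (Laplacian) energy, can be sketched for a single spatial level. The box blur, the single pyramid level, and the fixed gain below are all simplifications of the learned, multi-level spatio-temporal transformation in the paper:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding (a stand-in for the Gaussian
    filter used in a real Laplacian pyramid)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def change_band_contrast(img, gain):
    """Minimal single-level sketch: split a region into a low-pass and a
    band-pass (Laplacian) component, rescale the band-pass energy by
    `gain`, and recombine. A gain > 1 raises conspicuity, < 1 lowers it."""
    low = box_blur(img)
    band = img - low          # the Laplacian level
    return low + gain * band

patch = np.eye(8) * 100.0
boosted = change_band_contrast(patch, 1.5)
print(boosted.shape)  # (8, 8)
```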
Zhang, Xutong; Cui, Lixian; Han, Zhuo Rachel; Yan, Jia
2017-03-01
The current study examined parent heart rate (HR) dynamic changing patterns and their links to observed negative parenting (i.e., emotional unavailability and psychological control) during a parent-child conflict resolution task among 150 parent-child dyads (child age ranged from 6 to 12 years, Mage = 8.54 ± 1.67). Parent HR was obtained from electrocardiogram (ECG) data collected during the parent-child conflict resolution task. Negative parenting was coded offline based on the video recording of the same task. Results revealed that emotionally sensitive parents during the task showed greater HR increases while discussing a conflict and greater HR decreases while resolving the conflict, whereas emotionally unavailable parents showed no changes in HR. However, parent psychological control was not associated with HR dynamics during the task. These findings indicated the physiological underpinnings of parent emotional sensitivity and responsiveness during parent-child interactions. The potential association between HR baseline levels and parenting behaviors was also discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sensor Management for Tactical Surveillance Operations
2007-11-01
Active and passive sonar for submarine and torpedo detection, and mine avoidance; measures [range, bearing]; range 1.8 km to 55 km; active or passive. AN/SLQ-501 ... direction finding (DF) unit; measures [bearing, classification]; maximum range 1100 km; passive. Cameras (daylight/night-vision, video & still): record optical and infrared still images or motion video of events for near-real-time assessment or long-term analysis and archiving; range is limited by the image resolution.
A new method for wireless video monitoring of bird nests
David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin
2001-01-01
Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...
Action recognition in depth video from RGB perspective: A knowledge transfer manner
NASA Astrophysics Data System (ADS)
Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen
2018-03-01
Using different video modalities for human action recognition is becoming a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, in which features learned from RGB videos are used to recognize actions in depth videos. Specifically, we take three steps to solve this problem. First, since video is more complex than a still image, carrying both spatial and temporal information, we use the dynamic image method to represent each RGB or depth video as a single image; on this basis, most image feature extraction methods become applicable to video. Second, once videos are represented as images, a standard CNN model can be used for training and testing, and also as a feature extractor owing to its powerful representational ability. Third, because RGB and depth videos belong to two different domains, we apply domain adaptation to increase the similarity of the two feature domains, so that features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), where domain adaptation from RGB to depth yields an accuracy improvement of more than 2%.
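The dynamic image step can be sketched with a common closed-form approximation to rank pooling; the weights alpha_t = 2t - T - 1 are an assumption here, since the abstract only names the dynamic image method:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video into one 'dynamic image' via approximate rank
    pooling: later frames are weighted positively, earlier frames
    negatively, with alpha_t = 2t - T - 1 (a standard approximation,
    not necessarily the exact variant used in the paper)."""
    T = len(frames)
    alphas = np.array([2 * t - T - 1 for t in range(1, T + 1)], dtype=np.float64)
    di = np.tensordot(alphas, np.asarray(frames, dtype=np.float64), axes=1)
    # rescale to [0, 255] so the result can be fed to an image CNN
    di -= di.min()
    if di.max() > 0:
        di *= 255.0 / di.max()
    return di

# toy example: 8 frames of a 4x4 "video"
video = [np.full((4, 4), t, dtype=np.float64) for t in range(8)]
print(dynamic_image(video).shape)  # (4, 4)
```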
Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai
2016-12-01
This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated MuSCoWERT. It detects long linear features consistent over multiple scales by multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos containing over 33,000 frames and captured using visible-range and near-infrared-range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new challenging horizon detection dataset of 65 videos from visible and infrared cameras for onshore and onboard ship camera placements.
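A toy version of the central idea, scoring long linear features on a weighted edge map, can be sketched as follows. For brevity it evaluates only the horizontal (0-degree) Radon projection rather than the full multi-scale transform of the paper:

```python
import numpy as np

def detect_horizon_row(gray):
    """Toy sketch of the MuSCoWERT idea: find the strongest long linear
    feature on a weighted edge map. This version scores only horizontal
    lines (a single Radon projection at 0 degrees), whereas the paper
    searches over angles and scales."""
    # vertical gradient magnitude as the edge weight
    edges = np.abs(np.diff(gray.astype(np.float64), axis=0))
    # projection at 0 degrees: edge weight accumulated along each row
    row_scores = edges.sum(axis=1)
    return int(np.argmax(row_scores))

# synthetic sea/sky image: bright sky above row 40, dark sea below
img = np.vstack([np.full((40, 64), 200.0), np.full((24, 64), 30.0)])
print(detect_horizon_row(img))  # 39
```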
Dynamics of Two Interactive Bubbles in An Acoustic Field - Part II: Experiments
NASA Astrophysics Data System (ADS)
Ashgriz, Nasser; Barbat, Tiberiu; Liu, Ching-Shi
1996-11-01
The motion of two air bubbles levitated in water in the presence of a high-frequency acoustic field is experimentally studied. The interaction force between them is named the "secondary Bjerknes force" and may be significant in microgravity environments; in our experiments the buoyancy effect is compensated through the action of the "primary Bjerknes forces", the interaction between each bubble's oscillation and the external sound field. The stationary sound field is produced by a piezoceramic transducer in the range of 22-24 kHz. The experiments successfully demonstrate the existence of three patterns of interaction between bubbles of various sizes: attraction, repulsion and oscillation. Bubble attraction is quantitatively studied using a high-speed video camera for "large" bubbles (in the range 0.5-2 mm radius); bubble repulsion and oscillations are observed only with a regular video camera for "small" bubbles (around the resonance size at these frequencies, 0.12 mm). Velocities and accelerations of each bubble are computed from the time history of the motion. The theoretical equations of motion are completed with a drag force formula for single bubbles and solved numerically. Experimental results for the case of two attracting bubbles are in good agreement with the numerical model, especially for mutual distances greater than 3 large-bubble radii.
Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach
NASA Astrophysics Data System (ADS)
Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.
2012-04-01
Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that in the set of 23 trajectories from 3 different experiments the automatically recognized motion is subdiffusive, and they published an intriguing microscopy video. Here, we extract the corresponding time series from this video by an image segmentation method and present its detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.
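The long-memory ingredient of a FARIMA model is the fractional differencing operator (1-B)^d; its coefficients follow a standard recursion and can be computed and applied directly (this is the generic operator, not the paper's full fitting procedure):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of the fractional differencing operator (1-B)^d via
    the standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1-B)^d to a series, truncating the filter at the series
    length (removes long memory before fitting the short-memory ARMA part)."""
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

w = frac_diff_weights(0.3, 5)
print(np.round(w, 4))  # [1. -0.3 -0.105 -0.0595 -0.0402]
```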
VideoBeam portable laser communicator
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
1999-01-01
A VideoBeam™ portable laser communicator has been developed which provides full-duplex communication links consisting of high-quality analog video and stereo audio. The 3.2-pound unit has a binocular-type form factor and an operational range of over two miles (clear air) with excellent jam resistance and low-probability-of-interception characteristics. The VideoBeam™ unit is ideally suited for numerous military scenarios, surveillance/espionage, industrial precious-mineral exploration, and campus video teleconferencing applications.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. The measure is tested on the Video Quality Experts Group (VQEG) Phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
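The block-SVD comparison underlying such a measure can be sketched roughly as follows; the details below (8x8 blocks, deviation-from-median pooling) follow the generic image-SVD quality measure and are not necessarily the exact M-SVD formulation evaluated on the VQEG set:

```python
import numpy as np

def msvd_frame_distance(ref, dist, block=8):
    """Sketch of an SVD-based quality measure: compare the singular
    values of co-located blocks of the reference and distorted frames,
    then pool the per-block distances into one score."""
    h, w = ref.shape
    dists = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s_ref = np.linalg.svd(ref[i:i + block, j:j + block], compute_uv=False)
            s_dst = np.linalg.svd(dist[i:i + block, j:j + block], compute_uv=False)
            dists.append(np.sqrt(np.sum((s_ref - s_dst) ** 2)))
    dists = np.array(dists)
    # global score: mean absolute deviation from the median block distance
    return np.abs(dists - np.median(dists)).mean()

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 255.0, (32, 32))
print(msvd_frame_distance(ref, ref))  # 0.0
```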
Acousto-optic RF signal acquisition system
NASA Astrophysics Data System (ADS)
Bloxham, Laurence H.
1990-09-01
This paper describes the architecture and performance of a prototype Acousto-Optic RF Signal Acquisition System designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual-conversion, 12.8 MHz bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (TeO2) Bragg cell is used to channelize the 12.8 MHz pass band into 512 25-kHz channels. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50-micron fiber optic cables to collect the light at the focal plane of the AOSA and route the light to individual photodetectors. The photodetectors are operated in the photovoltaic mode to compress the greater-than-60-dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.
A method of mobile video transmission based on J2EE
NASA Astrophysics Data System (ADS)
Guo, Jian-xin; Zhao, Ji-chun; Gong, Jing; Chun, Yang
2013-03-01
As 3G (third-generation) networks evolve worldwide, the rising demand for mobile video services and the enormous growth of video on the Internet are creating major new revenue opportunities for mobile network operators and application developers. This paper introduces a method of mobile video transmission based on J2EE, presenting the video compression method, the video compression standard, and the software design. The proposed mobile video method based on J2EE is a typical mobile multimedia application, with high availability and a wide range of applications. Users can access the video through terminal devices such as mobile phones.
YouTube®: An ally or an enemy in the promotion of living donor kidney transplantation?
Bert, Fabrizio; Gualano, Maria Rosaria; Scozzari, Gitana; Alesina, Marta; Amoroso, Antonio; Siliquini, Roberta
2018-03-01
The aim of the study is to evaluate the availability and accuracy of the existing Italian-language medical information about living donor kidney transplantation on YouTube®. For each video, several data were collected, and each video was classified as "useful," "moderately useful" or "not useful." Globally, the search returned 306 videos: 260 were excluded and 46 included in the analysis. The main message conveyed by the video was positive in 28 cases (60.9%), neutral in 16 (34.8%) and negative in 2 (4.4%). The mean number of views was 3103.5 (range: 17 to 90,133) and the mean number of "likes" 2.7 (range: 0 to 28). Seven videos (15.2%) were classified as "useful," 21 (45.7%) as "moderately useful" and 18 (39.1%) as "not useful." This study showed that very few videos in Italian about living donor kidney transplantation are available on YouTube, with only 15 percent of them containing useful information for the general population.
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
ERIC Educational Resources Information Center
Tse, Tony; Vegh, Sandor; Shneiderman, Ben; Marchionini, Gary
1999-01-01
The purpose of this exploratory study was to develop research methods to compare the effectiveness of two video browsing interface designs, or surrogates--one static (storyboard) and one dynamic (slide show)--on two distinct information seeking tasks (gist determination and object recognition). (AEF)
Midcarpal instability: a diagnostic role for dynamic ultrasound?
Toms, A; Chojnowski, A; Cahir, J
2009-06-01
The aim of this study was to describe the technique of dynamic ultrasound (US) examination of the triquetral clunk, and to illustrate the range of findings in four patients with midcarpal instability (MCI). Four patients were identified (3 men, 1 woman). The case notes, plain radiographs, MRI and dynamic US for each patient were reviewed. Digital video files recording the dynamic US of the triquetral clunks were analysed for the following features of abnormal triquetral mobility: direction and speed of triquetral snap, amount of anteroposterior translocation, and flexion or extension during the snap. Five different triquetral clunks were recorded in 4 patients. In four out of five cases the clunk occurred during ulnar translocation of the wrist, and in one during radial translocation. Anteroposterior translocation was anterior (3.4-4.7 mm) in three of the clunks and posterior (1-10 mm) in two. The degree of flexion or extension varied between 1 and 16 degrees. The snapping phase of the clunk lasted between 0.17 and 0.25 seconds. Dynamic US can be used to confirm the diagnosis of midcarpal instability by identifying a triquetral catch-up clunk. Quantification of carpal mobility with US may lead to further insights into the mechanics of MCI.
Jin, Meihua; Jung, Ji-Young; Lee, Jung-Ryun
2016-10-12
With the arrival of the era of the Internet of Things (IoT), Wi-Fi Direct is becoming an emerging wireless technology that allows devices to communicate through a direct connection anytime, anywhere. In Wi-Fi Direct-based IoT networks, every device acts as either a group owner (GO) or a client. Since portability is emphasized in Wi-Fi Direct devices, it is essential to control the energy consumption of a device very efficiently. In order to avoid unnecessary power consumption by the GO, the Wi-Fi Direct standard defines two power-saving methods: the Opportunistic and Notice of Absence (NoA) power-saving methods. In this paper, we suggest an algorithm to enhance the energy efficiency of Wi-Fi Direct power saving, considering the characteristics of multimedia video traffic. The proposed algorithm utilizes the statistical distribution of video frame sizes and dynamically adjusts the lengths of awake intervals within a beacon interval. In addition, considering the inter-dependency among video frames, the proposed algorithm ensures that a video frame having high priority is transmitted with higher probability than frames having low priority. Simulation results show that the proposed method outperforms the traditional NoA method in terms of average delay and energy efficiency.
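As an illustration only (not the paper's algorithm), one way to use the frame-size distribution to size the awake window is to cover a chosen quantile of observed frame sizes at the link's PHY rate; the frame sizes and rate below are hypothetical inputs:

```python
import numpy as np

def awake_interval_ms(frame_sizes_bits, phy_rate_mbps, quantile=0.9):
    """Illustrative sketch: size the GO's awake window inside a beacon
    interval so that a chosen quantile of the observed video-frame-size
    distribution can be transmitted before sleeping."""
    q_bits = np.quantile(frame_sizes_bits, quantile)
    return q_bits / (phy_rate_mbps * 1e6) * 1e3  # milliseconds

# hypothetical traffic: large, rare I-frames and small, frequent P/B-frames
rng = np.random.default_rng(1)
sizes = np.concatenate([rng.normal(4e5, 5e4, 10),   # I-frames (bits)
                        rng.normal(5e4, 1e4, 90)])  # P/B-frames (bits)
print(round(awake_interval_ms(sizes, phy_rate_mbps=54.0), 3))
```

A priority-aware variant could use a higher quantile whenever an I-frame is pending, mirroring the paper's preference for high-priority frames.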
DOT National Transportation Integrated Search
2009-05-01
The evaluation of three Video Detection Systems (VDS) at an instrumented signalized intersection in Rantoul : Illinois, at both stop bar and advance detection zones, was performed under a wide range of lighting and : weather conditions. The evaluated...
Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.
2013-01-01
Objective To determine whether a systematic evaluation of facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons' treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Behavioral responses of silverback gorillas (Gorilla gorilla gorilla) to videos.
Maloney, Margaret A; Leighty, Katherine A; Kuhar, Christopher W; Bettinger, Tamara L
2011-01-01
This study examined the impact of video presentations on the behavior of 4 silverback, western lowland gorillas (Gorilla gorilla gorilla). On each of 5 occasions, gorillas viewed 6 types of videos (blue screen, humans, an all-male or mixed-sex group engaged in low activity, and an all-male or mixed-sex group engaged in agonistic behavior). The study recorded behavioral responses and watching rates. All gorillas preferred dynamic over static videos; 3 watched videos depicting gorillas significantly more than those depicting humans. Among the gorilla videos, the gorillas clearly preferred watching the mixed-sex group engaged in agonistic behavior; yet, this did not lead to an increase in aggression or behavior indicating agitation. Further, habituation to videos depicting gorillas did not occur. This supports the effectiveness of this form of enrichment, particularly for a nonhuman animal needing to be separated temporarily due to illness, shipment quarantine, social restructuring, or exhibit modification. Copyright © The Walt Disney Company®
Content-Aware Video Adaptation under Low-Bitrate Constraint
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin
2007-12-01
With the development of wireless networks and the improvement of mobile device capability, video streaming is more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, attractive regions of video scenes are derived. The information-object- (IOB-) weighted rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
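The bit-allocation idea can be reduced to a toy sketch in which each region's share of the budget is proportional to its attention weight; the paper's IOB-weighted rate-distortion model is considerably more involved:

```python
def allocate_bits(attention_weights, total_bits):
    """Toy attention-weighted bit allocation: each region's share of the
    bit budget is proportional to its attention weight (a simplification
    of the IOB-weighted rate-distortion model)."""
    total_w = sum(attention_weights)
    return [total_bits * w / total_w for w in attention_weights]

# three regions; the first is the most attractive
print(allocate_bits([5, 2, 1], 800))  # [500.0, 200.0, 100.0]
```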
Development of a video tampering dataset for forensic investigation.
Ismael Al-Sanjary, Omar; Ahmed, Ahmed Abdullah; Sulong, Ghazali
2016-09-01
Forgery is an act of modifying a document, product, image or video, among other media. Video tampering detection research requires an inclusive database of video modifications. This paper discusses a comprehensive proposal to create a dataset composed of modified videos for forensic investigation, in order to standardize existing techniques for detecting video tampering. The primary purpose of developing and designing this new video library is for use in video forensics, which can be consciously associated with reliable verification using dynamic and static camera recognition. To the best of the authors' knowledge, no similar library exists among the research community. Videos were sourced from YouTube and by exploring social networking sites extensively, observing posted videos and rating their feedback. The video tampering dataset (VTD) comprises a total of 33 videos, divided among three categories of video tampering: (1) copy-move, (2) splicing, and (3) frame swapping. Compared to existing datasets, this is a higher number of tampered videos, with longer durations. The duration of every video is 16 s, with a 1280×720 resolution and a frame rate of 30 frames per second. Moreover, all videos possess the same formatting quality (720p(HD).avi). Both temporal and spatial video features were considered carefully during selection of the videos, and complete information is provided about the doctored regions in every modified video in the VTD dataset. The database has been made publicly available for research on splicing, frame swapping, and copy-move tampering, and thus on various video tampering detection issues, with ground truth. It has been utilised by many international researchers and research groups. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Overview of the H.264/AVC video coding standard
NASA Astrophysics Data System (ADS)
Luthra, Ajay; Topiwala, Pankaj N.
2003-11-01
H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.
Yong, Y K; Moheimani, S O R; Kenton, B J; Leang, K K
2012-12-01
Recent interest in high-speed scanning probe microscopy for high-throughput applications including video-rate atomic force microscopy and probe-based nanofabrication has sparked attention on the development of high-bandwidth flexure-guided nanopositioning systems (nanopositioners). Such nanopositioners are designed to move samples with sub-nanometer resolution with positioning bandwidth in the kilohertz range. State-of-the-art designs incorporate uniquely designed flexure mechanisms driven by compact and stiff piezoelectric actuators. This paper surveys key advances in mechanical design and control of dynamic effects and nonlinearities, in the context of high-speed nanopositioning. Future challenges and research topics are also discussed.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Detection and localization of copy-paste forgeries in digital videos.
Singh, Raahat Devender; Aggarwal, Naveen
2017-12-01
Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simple clustering technique for the detection of copy-paste forgeries, and determine whether it possesses the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real-world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracy and exhibited different scopes of applicability. For videos compressed using QFs in the range 70-100, the existing noise-residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.
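The noise-residue idea underlying such sensor pattern noise schemes can be sketched as follows: subtract a denoised copy of each frame from the frame itself, then compare the residue against a reference pattern via normalised correlation. This is only an illustrative sketch under our own simplifications (the function names are ours, and a forensic implementation would use a stronger, e.g. wavelet-based, denoiser rather than a 3×3 box filter):

```python
def noise_residue(frame):
    """Noise residue of a grayscale frame: frame minus a 3x3 mean-filtered copy."""
    h, w = len(frame), len(frame[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Mean of the 3x3 neighbourhood, clipped at the image borders.
            vals = [
                frame[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
            ]
            res[y][x] = frame[y][x] - sum(vals) / len(vals)
    return res

def correlation(a, b):
    """Normalised cross-correlation between two residue maps (flattened)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0
```

A tampered region would show a low correlation between its residue and the camera's reference pattern, flagging it as inconsistent with the rest of the frame.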
STS-107 Debris Characterization Using Re-entry Imaging
NASA Technical Reports Server (NTRS)
Raiche, George A.
2009-01-01
Analysis of amateur video of the early reentry phases of the Columbia accident is discussed. With poor video quality and little theoretical guidance, the analysis team estimated mass and acceleration ranges for the debris shedding events observed in the video. Camera calibration and optical performance issues are also described.
Teaching Reading: 3-5 Workshop
ERIC Educational Resources Information Center
Annenberg Media, 2005
2005-01-01
This video workshop with auxiliary classroom videos will show intermediate elementary teachers how to help their students transition from "learning to read" to "reading to learn." Eight half-hour workshop video programs feature leading experts who discuss current research on learning to read and teaching a diverse range of students. The research…
Teaching Shakespeare with YouTube
ERIC Educational Resources Information Center
Desmet, Christy
2009-01-01
YouTube, the video sharing website that allows viewers to upload video content ranging from cute dog tricks to rare rock videos, also supports a lively community devoted to the performance of Shakespeare and Shakespearean adaptations. YouTube is also a popular site for student producers of Shakespeare performances, parodies, and other artistic…
Fulldome Video: An Emerging Technology for Education
ERIC Educational Resources Information Center
Law, Linda E.
2006-01-01
This article talks about fulldome video, a new technology which has been adopted fairly extensively by the larger, well-funded planetariums. Fulldome video, also called immersive projection, can help teach subjects ranging from geology to history to chemistry. The rapidly advancing progress of projection technology has provided high-resolution…
Occupational Therapy and Video Modeling for Children with Autism
ERIC Educational Resources Information Center
Becker, Emily Ann; Watry-Christian, Meghan; Simmons, Amanda; Van Eperen, Ashleigh
2016-01-01
This review explores the evidence in support of using video modeling for teaching children with autism. The process of implementing video modeling, the use of various perspectives, and a wide range of target skills are addressed. Additionally, several helpful clinician resources including handheld device applications, books, and websites are…
Video personalization for usage environment
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.
2002-07-01
A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
Kim, Changsun; Cha, Hyunmin; Kang, Bo Seung; Choi, Hyuk Joong; Lim, Tae Ho; Oh, Jaehoon
2016-06-01
Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone with control of the image quality given a limited internet connection speed. For this study, 100 cases of echocardiography videos (dynamic organ)-50 with an ejection fraction (EF) of ≥50 s and 50 with EF <50 %-and 100 cases of suspected pediatric appendicitis (static organ)-50 with signs of acute appendicitis and 50 with no findings of appendicitis-were consecutively selected. Twelve reviewers reviewed the original videos using the liquid crystal display (LCD) monitor of an ultrasound machine and using a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949-0.986) and 0.963 (0.945-0.982) (P = 0.548) for echocardiography and 0.972 (0.954-0.989) and 0.966 (0.947-0.984) (P = 0.175) for abdominal ultrasonography. We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography.
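The areas under the ROC curve quoted above can be computed directly from five-point Likert ratings via the rank-sum (Mann-Whitney) formulation of the AUC: the probability that a randomly chosen positive case receives a higher rating than a randomly chosen negative one, with ties counting half. A minimal sketch (the function and the example ratings are illustrative, not the study's data):

```python
def auc_from_ratings(positive, negative):
    """Empirical AUC from ordinal confidence ratings.

    Equivalent to the Mann-Whitney U statistic normalised by the number
    of positive/negative pairs; tied ratings count as 0.5.
    """
    wins = 0.0
    for p in positive:
        for n in negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive) * len(negative))

# Hypothetical five-point Likert ratings for cases with and without the finding
with_finding = [5, 4, 4, 5, 3, 4]
without_finding = [1, 2, 2, 3, 1, 2]
print(round(auc_from_ratings(with_finding, without_finding), 3))  # → 0.986
```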
Cognitive behaviour therapy via interactive video.
Manchanda, M; McLaren, P
1998-01-01
Interactive video has been identified as a potential delivery medium for psychotherapy. Interactive video may restrict the range of both verbal and non-verbal communication and consequently impede the development of a therapeutic relationship, thus influencing the process and outcome of therapy. A single case study explored the feasibility of the provision of cognitive behaviour therapy using interactive video with a client diagnosed as having mixed anxiety and depressive disorder. A range of outcome measures were included together with an independent psychiatric assessment prior to, and on completion of, therapy. Different levels of outcome were also examined: clinical, social, user views and administration. Outcome measures indicated a reduction in psychopathology and some modification of dysfunctional attitudes, with no apparent impairment of the working alliance.
Mar, Pamela; Spears, Robert; Reeb, Jeffrey; Thompson, Sarah B; Myers, Paul; Burke, Rita V
2018-02-22
Eight million American children under the age of 5 attend daycare, and more than another 50 million American children are in school or daycare settings. Emergency planning requirements for daycare licensing vary by state. Expert opinions were used to create a disaster preparedness video designed for daycare providers to cover a broad spectrum of scenarios. Various stakeholders (17) devised the outline for an educational pre-disaster video for child daycare providers using the Delphi technique. Fleiss κ values were obtained for the consensus data. A 20-minute video was created, addressing the physical, psychological, and legal needs of children during and after a disaster. Viewers completed an anonymous survey to evaluate topic comprehension. A consensus was attempted on all topics, ranging from elements for inclusion to presentation format; a Fleiss κ value of 0.07 was obtained. Fifty-seven of the total 168 video viewers completed the 10-question survey, with comprehension scores ranging from 72% to 100%. Evaluation of the caregivers who viewed our video supports understanding of the video contents. Ultimately, the technique used to create and disseminate the resources may serve as a template for others providing pre-disaster planning education. (Disaster Med Public Health Preparedness. 2018;page 1 of 5).
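Fleiss' κ, used above to quantify consensus, measures agreement among a fixed number of raters assigning items to categories, corrected for chance. A minimal sketch with a hypothetical tally table (not the study's data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table counts[item][category] of rater tallies.

    Assumes every item was rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item agreement: fraction of rater pairs that agree on the item.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical tally: 2 items, 3 raters, 2 categories
print(round(fleiss_kappa([[3, 0], [1, 2]]), 3))  # → 0.25
```

A value near 0, as reported above, indicates agreement barely better than chance.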
Stellefson, Michael; Chaney, Beth; Ochipa, Kathleen; Chaney, Don; Haider, Zeerak; Hanik, Bruce; Chavarria, Enmanuel; Bernhardt, Jay M
2014-05-01
The aim of the present study is to conduct a social media content analysis of chronic obstructive pulmonary disease (COPD) patient education videos on YouTube. A systematic search protocol was used to locate 223 videos. Two independent coders evaluated each video to determine topics covered, media source(s) of posted videos, information quality as measured by HONcode guidelines for posting trustworthy health information on the Internet, and viewer exposure/engagement metrics. Over half the videos (n = 113, 50.7%) included information on medication management, with far fewer videos on smoking cessation (n = 40, 17.9%). Most videos were posted by a health agency or organization (n = 128, 57.4%), and the majority of videos were rated as high quality (n = 154, 69.1%). HONcode adherence differed by media source (Fisher's exact test = 20.52, p = 0.01), with user-generated content receiving the lowest quality scores. Overall level of user engagement as measured by number of "likes," "favorites," "dislikes," and user comments was low (median range = 0-3, interquartile range = 0-16) across all sources of media. Study findings suggest that COPD education via YouTube has the potential to reach and inform patients; however, existing video content and quality varies significantly. Future interventions should help direct individuals with COPD to engage with high-quality patient education videos on YouTube that are posted by reputable health organizations and qualified medical professionals. Patients should be educated to avoid and/or critically view low-quality videos posted by individual YouTube users who are not health professionals.
Development of a video decision aid to inform parents on potential outcomes of extreme prematurity.
Guillén, Ú; Suh, S; Wang, E; Stickelman, V; Kirpalani, H
2016-11-01
The objective of the study is to develop and validate a video-based parental decision aid about the outcomes of extremely premature infants. Thirty-one clinicians and 30 parents of extremely premature infants (<26 weeks gestation) previously underwent semi-structured interviews to assess perceptions of antenatal counseling. Interviewees recommended a video. A video was iteratively developed, with final validation by three groups: clinicians (n=16), parents with a history of extreme prematurity (n=14) and healthy 'naïve' women without prior knowledge of prematurity (n=13). Two iterations of the video were created. Following a simulated counseling session, an eight-question survey and the State-Trait Anxiety Inventory (STAI) were administered to parents and 'naïve' participants to assess usefulness and stress provocation. The final 10-min video shows six children/parent dyads of former 23 to 25 week premature children with a wide range of outcomes. This video was well accepted by clinicians as well as parent and 'naïve' participants, who perceived it as 'balanced' with a 'neutral' message. The video was felt to provide useful information and insight on prematurity. The final version of the video did not induce anxiety: parents STAI-S 36.1±12.1; 'naïve' 30.2±8.9. A short video showing the range of outcomes of extreme prematurity has been produced. It is well accepted and does not increase levels of anxiety as measured by the STAI. This video may be a useful and non-stress-inducing aid at the time of counseling parents facing extreme prematurity.
ERIC Educational Resources Information Center
Frisby, Brandi N.; Kaufmann, Renee; Beck, Anna-Carrie
2016-01-01
Instructors incorporate technological tools into the classroom to address short attention spans, appeal to technologically savvy students, and to increase engagement. This study used both quantitative descriptive and qualitative embedded assessment data to examine the use of three popular tools (i.e. Twitter, Facebook, and video chatting) in…
The Impact of Using Youtube in EFL Classroom on Enhancing EFL Students' Content Learning
ERIC Educational Resources Information Center
Alwehaibi, Huda Omar
2015-01-01
Information technology has opened up prospects for rich and innovative approaches to tackle educational issues and provide solutions to the increasing demands for learning resources. YouTube, a video-sharing website that allows users to upload, view, and share video clips, offers access to new and dynamic opportunities for effective and…
ERIC Educational Resources Information Center
Downes, Stephen
2008-01-01
Founded in 2005 by three former PayPal employees, YouTube has revolutionized the Internet, marking a change from the static Internet to the dynamic Internet. In this edition of Places to Go, Stephen Downes discusses how the rise of a ubiquitous media format--Flash video--has made YouTube's success possible and argues that Flash video has important…
Teaching "How Science Works" by Making and Sharing Videos
ERIC Educational Resources Information Center
Ingram, Neil
2010-01-01
"Science.tv" is a website where teachers and pupils can find quality video clips on a variety of scientific topics. It enables pupils to share research ideas and adds a dynamic new dimension to practical work. It has the potential to become an innovative way of incorporating "How science works" into secondary science curricula by encouraging…
The development of augmented video system on postcards
NASA Astrophysics Data System (ADS)
Chen, Chien-Hsu; Chou, Yin-Ju
2013-03-01
This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker, attach it to a picture postcard, and record a real-time image or video to be augmented onto that marker. Through this dynamic imagery, users can share travel moods, greetings, and travel experiences with their friends. Without changing the traditional picture postcard, we develop an augmented video system on it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience, preserving memories and emotional expression through the digital media augmented on it.
Addison, Paul S; Jacquel, Dominique; Foo, David M H; Borg, Ulf R
2017-11-09
The robust monitoring of heart rate from the video-photoplethysmogram (video-PPG) during challenging conditions requires new analysis techniques. The work reported here extends current research in this area by applying a motion-tolerant algorithm to extract high-quality video-PPGs from a cohort of subjects undergoing marked heart rate changes during a hypoxic challenge, and exhibiting the full range of skin pigmentation types. High uptimes in reported video-based heart rate (HRvid) were targeted, while retaining high accuracy in the results. Ten healthy volunteers were studied during a double-desaturation hypoxic challenge. Video-PPGs were generated from the acquired video image stream and processed to generate heart rate. HRvid was compared to the pulse rate posted by a reference pulse oximeter device (HRp). Agreement between the video-based heart rate and that provided by the pulse oximeter was as follows: bias = -0.21 bpm, RMSD = 2.15 bpm, least-squares fit gradient = 1.00 (Pearson R = 0.99, p < 0.0001), with a 98.78% reporting uptime. The difference between HRvid and HRp exceeded 5 and 10 bpm for 3.59% and 0.35% of the reporting time, respectively, and at no point did these differences exceed 25 bpm. Excellent agreement was found between HRvid and HRp in a study covering the whole range of skin pigmentation types (Fitzpatrick scales I-VI), using standard room lighting and with moderate subject motion. Although promising, further work should include a larger cohort with multiple subjects per Fitzpatrick class combined with a more rigorous motion and lighting protocol.
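The agreement figures reported above (bias, RMSD, and the percentage of differences exceeding a threshold) are straightforward paired statistics in the Bland-Altman tradition. A minimal sketch with hypothetical paired readings, not the study's data:

```python
import math

def agreement_stats(hr_video, hr_ref, threshold=5.0):
    """Bias, RMSD, and % of |differences| over threshold for paired HR series (bpm)."""
    diffs = [v - r for v, r in zip(hr_video, hr_ref)]
    n = len(diffs)
    bias = sum(diffs) / n                                   # mean difference
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)         # root-mean-square difference
    pct_over = 100.0 * sum(abs(d) > threshold for d in diffs) / n
    return bias, rmsd, pct_over

# Hypothetical paired readings (bpm): video-derived vs pulse-oximeter reference
bias, rmsd, pct = agreement_stats([61, 72, 88, 95], [60, 74, 88, 96])
```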
TANDIR: projectile warning system using uncooled bolometric technology
NASA Astrophysics Data System (ADS)
Horovitz-Limor, Z.; Zahler, M.
2007-04-01
Following the demand for affordable, lightweight protection against ATGMs at various ranges, Elisra is developing a cost-effective passive IR system for ground vehicles. The system is based on wide-FOV uncooled bolometric sensors with full azimuth coverage and a lightweight processing and control unit. The system design accounts for harsh environmental conditions. The basic algorithm discriminates the target from its clutter and predicts the time to impact (TTI) and the target's aiming direction relative to the vehicle. The current detector format is 320×240 pixels at a frame rate of 60 Hz, with spectral response in the far infrared (8-14 μm). The digital video output has 14-bit resolution and wide dynamic range. A future goal is to enhance detection performance by using a large-format uncooled detector (640×480) with improved sensitivity and higher frame rates (up to 120 Hz).
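A time-to-impact estimate for a closing target is commonly derived from the growth rate of its angular size: for a target of fixed physical size approaching at constant speed, TTI ≈ θ / (dθ/dt), since θ ≈ s/R gives θ/θ̇ = R/v. This is a generic sketch of that relation, not Elisra's algorithm (whose details are not given in the abstract):

```python
def time_to_impact(theta_prev, theta_now, dt):
    """Time-to-impact from the growth of a target's angular size.

    theta_prev, theta_now: angular sizes (any consistent unit) in
    consecutive frames dt seconds apart. Assumes constant closing speed.
    """
    rate = (theta_now - theta_prev) / dt
    if rate <= 0:
        return float("inf")  # target not closing
    return theta_now / rate

# A target whose angular size doubles from 0.01 to 0.02 rad in 1 s
# is 2 s from impact under the constant-speed assumption.
tti = time_to_impact(0.01, 0.02, 1.0)
```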
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TM) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
Quantitative Spatial and Temporal Analysis of Fluorescein Angiography Dynamics in the Eye
Hui, Flora; Nguyen, Christine T. O.; Bedggood, Phillip A.; He, Zheng; Fish, Rebecca L.; Gurrell, Rachel; Vingrys, Algis J.; Bui, Bang V.
2014-01-01
Purpose: We describe a novel approach to analyzing fluorescein angiography to investigate fluorescein flow dynamics in the rat posterior retina, as well as to identify abnormal areas following laser photocoagulation. Methods: Experiments were undertaken in adult Long Evans rats. Using a rodent retinal camera, videos were acquired at 30 frames per second for 30 seconds following intravenous introduction of sodium fluorescein in a group of control animals (n = 14). Videos were image-registered and analyzed using principal component analysis across all pixels in the field. This returns fluorescence intensity profiles from which the half-rise (time to 50% brightness), the half-fall (time for 50% decay), and the offset (plateau level of fluorescence) are extracted. We applied this analysis to video fluorescein angiography data collected 30 minutes following laser photocoagulation in a separate group of rats (n = 7). Results: Pixel-by-pixel analysis of video angiography clearly delineates differences in the temporal profiles of arteries, veins, and capillaries in the posterior retina. We find no difference in half-rise, half-fall, or offset among the four quadrants (inferior, nasal, superior, temporal), and little difference with eccentricity. By expressing the parameters at each pixel as the number of standard deviations from the average of the entire field, we could clearly identify the spatial extent of the laser injury. Conclusions: This simple registration and analysis provides a way to monitor the size of a vascular injury, to highlight areas of subtle vascular leakage, and to quantify vascular dynamics not possible using current fluorescein angiography approaches. It can be applied in both laboratory and clinical settings for in vivo dynamic fluorescent imaging of vasculature. PMID:25365578
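The per-pixel timing parameters described above (half-rise, half-fall, offset) can be extracted from a fluorescence intensity profile roughly as follows. This is a sketch under our own assumptions about the exact definitions (e.g. the plateau estimated from the mean of the final frames, and half-fall measured from the peak toward that plateau):

```python
def profile_timing(intensity, fps=30):
    """Half-rise, half-fall, and plateau offset of a fluorescence profile.

    intensity: per-frame brightness at one pixel; fps: frames per second.
    Half-rise is the first time at 50% of peak; half-fall is the first
    time after the peak at which intensity has decayed halfway from the
    peak toward the offset (plateau, here the mean of the last 5 frames).
    """
    peak = max(intensity)
    i_peak = intensity.index(peak)
    offset = sum(intensity[-5:]) / 5.0
    half_up = peak / 2.0
    half_down = offset + (peak - offset) / 2.0
    t_rise = next(i for i, v in enumerate(intensity) if v >= half_up) / fps
    t_fall = next(
        i for i, v in enumerate(intensity[i_peak:], start=i_peak) if v <= half_down
    ) / fps
    return t_rise, t_fall, offset

# Toy profile sampled at 1 frame/s: rise, peak, decay to a plateau of 5
t_rise, t_fall, offset = profile_timing(
    [0, 0, 10, 20, 20, 10, 6, 5, 5, 5, 5, 5], fps=1
)
```

Deviations of these parameters from the field-wide mean (in standard-deviation units) would then map the spatial extent of an injury, as in the abstract.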
Bringing "Scientific Expeditions" Into the Schools
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high-resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self-controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using a video format (transmitting pixel information), such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using a video format are: (1) the visual is much higher in resolution (1280×1024 pixels with 24 bits of color) than typical video format transmitted over the network; (2) the form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation); (3) a rich variety of guided expeditions through the data can be included easily; (4) a capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed, and control of the analysis can be passed from site to site; (5) the scenes can be viewed in 3D using stereo vision; and (6) the network bandwidth used for the visualization with this new technology is much smaller than when using a video format (the measured peak bandwidth used was 1 Kbit/s, whereas the measured bandwidth for a small video picture was 500 Kbit/s).
Design and fabrication of an autonomous rendezvous and docking sensor using off-the-shelf hardware
NASA Technical Reports Server (NTRS)
Grimm, Gary E.; Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
1991-01-01
NASA Marshall Space Flight Center (MSFC) has developed and tested an engineering model of an automated rendezvous and docking sensor system composed of a video camera ringed with laser diodes at two wavelengths and a standard remote manipulator system target that has been modified with retro-reflective tape and 830 and 780 nm optical filters. TRW has provided additional engineering analysis, design, and manufacturing support, resulting in a robust, low-cost, automated rendezvous and docking sensor design. We have addressed the issue of space qualification using off-the-shelf hardware components. We have also addressed the performance problems of increased signal-to-noise ratio, increased range, increased frame rate, graceful degradation through component redundancy, and improved range calibration. Next year, we will build a breadboard of this sensor. The phenomenology of the background scene of a target vehicle as viewed against earth and space backgrounds under various lighting conditions will be simulated using the TRW Dynamic Scene Generator Facility (DSGF). Solar illumination angles of the target vehicle and candidate docking target ranging from eclipse to full sun will be explored. The sensor will be transportable for testing at the MSFC Flight Robotics Laboratory (EB24) using the Dynamic Overhead Telerobotic Simulator (DOTS).
ERIC Educational Resources Information Center
Boger, Claire
2011-01-01
The rapid advancement in the capabilities of computer technologies has made it easier to design and deploy dynamic visualizations in web-based learning environments; yet, the implementation of these dynamic visuals has been met with mixed results. While many guidelines exist to assist instructional designers in the design and application of…
Review of intelligent video surveillance with single camera
NASA Astrophysics Data System (ADS)
Liu, Ying; Fan, Jiu-lun; Wang, DianWei
2012-01-01
Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements in real-world scenarios, and proposes potential solutions to reduce this gap.
Joint Attributes and Event Analysis for Multimedia Event Detection.
Ma, Zhigang; Chang, Xiaojun; Xu, Zhongwen; Sebe, Nicu; Hauptmann, Alexander G
2017-06-15
Semantic attributes have been increasingly used in the past few years for multimedia event detection (MED) with promising results. The motivation is that multimedia events generally consist of lower level components such as objects, scenes, and actions. By characterizing multimedia event videos with semantic attributes, one could exploit more informative cues for improved detection results. Much existing work obtains semantic attributes from images, which may be suboptimal for video analysis since these image-inferred attributes do not carry dynamic information that is essential for videos. To address this issue, we propose to learn semantic attributes from external videos using their semantic labels. We name them video attributes in this paper. In contrast with multimedia event videos, these external videos depict lower level contents such as objects, scenes, and actions. To harness video attributes, we propose an algorithm established on a correlation vector that correlates them to a target event. Consequently, we could incorporate video attributes latently as extra information into the event detector learned from multimedia event videos in a joint framework. To validate our method, we perform experiments on the real-world large-scale TRECVID MED 2013 and 2014 data sets and compare our method with several state-of-the-art algorithms. The experiments show that our method is advantageous for MED.
Outcomes and Perceptions of Annotated Video Feedback Following Psychomotor Skill Laboratories
ERIC Educational Resources Information Center
Truskowski, S.; VanderMolen, J.
2017-01-01
This study sought to explore the effectiveness of annotated video technology for providing feedback to occupational therapy students learning transfers, range of motion and manual muscle testing. Fifty-seven first-year occupational therapy students were split into two groups. One received annotated video feedback during a transfer lab and…
3rd-generation MW/LWIR sensor engine for advanced tactical systems
NASA Astrophysics Data System (ADS)
King, Donald F.; Graham, Jason S.; Kennedy, Adam M.; Mullins, Richard N.; McQuitty, Jeffrey C.; Radford, William A.; Kostrzewa, Thomas J.; Patten, Elizabeth A.; McEwan, Thomas F.; Vodicka, James G.; Wootan, John J.
2008-04-01
Raytheon has developed a 3rd-Generation FLIR Sensor Engine (3GFSE) for advanced U.S. Army systems. The sensor engine is based around a compact, productized detector-dewar assembly incorporating a 640 x 480 staring dual-band (MW/LWIR) focal plane array (FPA) and a dual-aperture coldshield mechanism. The capability to switch the coldshield aperture and operate at either of two widely-varying f/#s will enable future multi-mode tactical systems to more fully exploit the many operational advantages offered by dual-band FPAs. RVS has previously demonstrated high-performance dual-band MW/LWIR FPAs in 640 x 480 and 1280 x 720 formats with 20 μm pitch. The 3GFSE includes compact electronics that operate the dual-band FPA and variable-aperture mechanism, and perform 14-bit analog-to-digital conversion of the FPA output video. Digital signal processing electronics perform "fixed" two-point non-uniformity correction (NUC) of the video from both bands and optional dynamic scene-based NUC; advanced enhancement processing of the output video is also supported. The dewar-electronics assembly measures approximately 4.75 x 2.25 x 1.75 inches. A compact, high-performance linear cooler and cooler electronics module provide the necessary FPA cooling over a military environmental temperature range. 3GFSE units are currently being assembled and integrated at RVS, with the first units planned for delivery to the US Army.
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time processing particles with very small weights. The adaptive appearance model for an occluded object refers to the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of using inadequate information to update the appearance under occlusion. The experimental results show that a small number of particles is sufficient to achieve high positioning and scaling accuracy, and that the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
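The prediction-driven sampling described in this abstract can be sketched roughly as follows; the 2-D Gaussian form and every numeric value below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adaptive_particles(pred_pos, pred_cov, n=50, seed=0):
    """Draw candidate particles only around the Kalman-predicted position.

    The sampling range tracks the filter's predicted covariance, so the
    spread shrinks when the filter is confident: particles are not wasted
    in low-probability regions, which is how the degeneracy problem is
    avoided in this sketch.
    """
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(pred_pos, pred_cov, size=n)

# Hypothetical predicted pixel position with anisotropic uncertainty:
particles = adaptive_particles(np.array([120.0, 80.0]),
                               np.array([[9.0, 0.0], [0.0, 4.0]]))
print(particles.shape)  # (50, 2)
```

The particle cloud is then scored against the observed measurement; only this sampling step, not the full tracker, is sketched here.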
Video-rate functional photoacoustic microscopy at depths
NASA Astrophysics Data System (ADS)
Wang, Lidai; Maslov, Konstantin; Xing, Wenxin; Garcia-Uribe, Alejandro; Wang, Lihong V.
2012-10-01
We report the development of functional photoacoustic microscopy capable of video-rate high-resolution in vivo imaging in deep tissue. A lightweight photoacoustic probe is made of a single-element broadband ultrasound transducer, a compact photoacoustic beam combiner, and a bright-field light delivery system. Focused broadband ultrasound detection provides a 44-μm lateral resolution and a 28-μm axial resolution based on the envelope (a 15-μm axial resolution based on the raw RF signal). Due to the efficient bright-field light delivery, the system can image as deep as 4.8 mm in vivo using low excitation pulse energy (28 μJ per pulse, 0.35 mJ/cm2 on the skin surface). The photoacoustic probe is mounted on a fast-scanning voice-coil scanner to acquire 40 two-dimensional (2-D) B-scan images per second over a 9-mm range. High-resolution anatomical imaging is demonstrated in the mouse ear and brain. Via fast dual-wavelength switching, oxygen dynamics of the mouse cardiovasculature are imaged in real time as well.
Longitudinal effects of violent video games on aggression in Japan and the United States.
Anderson, Craig A; Sakamoto, Akira; Gentile, Douglas A; Ihori, Nobuko; Shibuya, Akiko; Yukawa, Shintaro; Naito, Mayumi; Kobayashi, Kumiko
2008-11-01
Youth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression. We tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness. In 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months. One sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structural equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth. These longitudinal results confirm earlier experimental and cross-sectional studies suggesting that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.
A 50Mbit/Sec. CMOS Video Linestore System
NASA Astrophysics Data System (ADS)
Jeung, Yeun C.
1988-10-01
This paper reports the architecture, design, and test results of a CMOS single-chip programmable video linestore system with 16-bit data words and 1024-bit depth. The delay is fully programmable from 9 to 1033 samples by a 10-bit binary control word. The large 16-bit data word width makes the chip useful for a wide variety of digital video signal processing applications such as DPCM coding, high-definition TV, and video scramblers/descramblers. For those applications, the conventional large fixed-length shift register or static RAM scheme is not very popular because of its lack of versatility, high power consumption, and required support circuitry. The very high throughput of 50 Mbit/s is made possible by a highly parallel, pipelined dynamic memory architecture implemented in a 2-μm N-well CMOS technology. The basic cell of the programmable video linestore chip is a four-transistor dynamic RAM element. This cell comprises the majority of the chip's real estate, consumes no static power, and gives good noise immunity to the simply designed sense amplifier. The chip design was done using Bellcore's version of the MULGA virtual grid symbolic layout system. The chip contains approximately 90,000 transistors in an area of 6.5 x 7.5 square mm, and the I/Os are TTL compatible. The chip is packaged in a 68-pin leadless ceramic chip carrier.
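The programmable delay behaves like a fixed-capacity ring buffer: a sample written now re-emerges exactly `delay` samples later. A toy behavioral model (this sketch says nothing about the chip's DRAM cells or pipelined architecture):

```python
from collections import deque

class Linestore:
    """Software model of a programmable video delay line.

    `delay` would be the value set by the chip's 10-bit control word
    (9 to 1033 on the actual device); any positive integer works here.
    """
    def __init__(self, delay):
        self.buf = deque([0] * delay, maxlen=delay)

    def step(self, sample):
        out = self.buf[0]        # oldest sample leaves...
        self.buf.append(sample)  # ...as the new one enters
        return out

d = Linestore(delay=4)
out = [d.step(x) for x in range(8)]
print(out)  # [0, 0, 0, 0, 0, 1, 2, 3]
```

Each input value reappears exactly four steps after it was written, matching the programmed delay.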
A sensor and video based ontology for activity recognition in smart environments.
Mitchell, D; Morrow, Philip J; Nugent, Chris D
2014-01-01
Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.
Portable Airborne Laser System Measures Forest-Canopy Height
NASA Technical Reports Server (NTRS)
Nelson, Ross
2005-01-01
The Portable Airborne Laser System (PALS) is a combination of laser ranging, video imaging, positioning, and data-processing subsystems designed for measuring the heights of forest canopies along linear transects from tens to thousands of kilometers long. Unlike prior laser ranging systems designed to serve the same purpose, the PALS is not restricted to use aboard a single aircraft of a specific type: the PALS fits into two large suitcases that can be carried to any convenient location, and the PALS can be installed in almost any local aircraft for hire, thereby making it possible to sample remote forests at relatively low cost. The initial cost and the cost of repairing the PALS are also lower because the PALS hardware consists mostly of commercial off-the-shelf (COTS) units that can easily be replaced in the field. The COTS units include a laser ranging transceiver, a charge-coupled-device camera that images the laser-illuminated targets, a differential Global Positioning System (dGPS) receiver capable of operation within the Wide Area Augmentation System, a video titler, a video cassette recorder (VCR), and a laptop computer equipped with two serial ports. The VCR and computer are powered by batteries; the other units are powered at 12 VDC from the 28-VDC aircraft power system via a low-pass filter and a voltage converter. The dGPS receiver feeds location and time data, at an update rate of 0.5 Hz, to the video titler and the computer. The laser ranging transceiver, operating at a sampling rate of 2 kHz, feeds its serial range and amplitude data stream to the computer. The analog video signal from the CCD camera is fed into the video titler, wherein the signal is annotated with position and time information. The titler then forwards the annotated signal to the VCR for recording on 8-mm tapes.
The dGPS and laser range and amplitude serial data streams are processed by software that displays the laser trace and the dGPS information as they are fed into the computer, subsamples the laser range and amplitude data, interleaves the subsampled data with the dGPS information, and records the resulting interleaved data stream.
NASA Astrophysics Data System (ADS)
Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.
2016-02-01
Near-infrared (NIR) dynamic scene projection systems are used to perform hardware-in-the-loop (HWIL) testing of a unit under test operating in the NIR band. The common and complex requirement of a class of these units is a dynamic scene that is spatio-temporally variant. In this paper we apply and investigate active external modulation of NIR laser light over different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude, by pulse width modulation (PWM) of the mirror elements, as well as the spatio-temporal pattern. Desired modulation and coding of a highly stable, high-power visible laser (red, 640 nm) and a NIR laser (diode, 976 nm) were achieved using combinations of different DMD-based optical masks. These versatile active spatial coding strategies, for both low and high frequencies in the kHz range, were generated by our system for the irradiance of different targets and recorded using VIS-NIR fast cameras. The temporally modulated laser pulse traces were measured using an array of fast-response photodetectors. Finally, using a high-resolution spectrometer, we evaluated the NIR dynamic scene projection system's response in terms of preserving the wavelength and band spread of the NIR source after projection.
Obeidat, Shadi; Badin, Shadi; Khawaja, Imran
2010-04-01
Dynamic Y stents are used in tracheobronchial obstruction, tracheal stenosis, and tracheomalacia. Placement may be difficult and is usually accomplished using a rigid grasping forceps (under fluoroscopic guidance) or a rigid bronchoscope. We report using a new stent placement technique on an elderly patient with a central tracheobronchial tumor. It included using a flexible bronchoscope, video laryngoscope, and laryngeal mask airway. The new technique we used has the advantages of continuous direct endoscopic visualization during stent advancement and manipulation, and securing the airways with a laryngeal mask airway at the same time. This technique eliminates the need for intraoperative fluoroscopy.
Vigilance on the move: video game-based measurement of sustained attention.
Szalma, J L; Schmidt, T N; Teo, G W L; Hancock, P A
2014-01-01
Vigilance represents the capacity to sustain attention to any environmental source of information over prolonged periods on watch. Most stimuli used in vigilance research over the previous six decades have been relatively simple and often purport to represent important aspects of detection and discrimination tasks in real-world settings. Such displays are most frequently composed of single stimulus presentations in discrete trials against a uniform, often uncluttered background. The present experiment establishes a dynamic, first-person perspective vigilance task in motion using a video-game environment. 'Vigilance on the move' is thus a new paradigm for the study of sustained attention in operational environments in which individuals move as they monitor their surroundings. We conclude that the stress of vigilance extends to the new paradigm, but whether the performance decrement emerges depends upon specific task parameters. The development of the task, the issues to be resolved, and the pattern of performance, perceived workload, and stress associated with performing such dynamic vigilance are reported.
ERIC Educational Resources Information Center
Carrein, Cindy; Bernaud, Jean-Luc
2010-01-01
This study investigated the effects of nonverbal self-disclosure within the dynamic of aptitude-treatment interaction. Participants (N = 94) watched a video of a career counseling session aimed at helping the jobseeker to find employment. The video was then edited to display 3 varying degrees of nonverbal self-disclosure. In conjunction with the…
ERIC Educational Resources Information Center
Gawlik, Christina L.
2009-01-01
Online assessments afford many advantages for teachers and students. Okolo (2006) stated, "As the power, sophistication, and availability of technology have increased in the classroom, online assessments have become a viable tool for providing the type of frequent and dynamic assessment information that educators need to guide instructional…
ERIC Educational Resources Information Center
Onorato, P.; Mascheretti, P.; DeAmbrosis, A.
2012-01-01
In this paper, we describe how simple experiments realizable by using easily found and low-cost materials allow students to explore quantitatively the magnetic interaction thanks to the help of an Open Source Physics tool, the Tracker Video Analysis software. The static equilibrium of a "column" of permanents magnets is carefully investigated by…
On continuous user authentication via typing behavior.
Roth, Joseph; Liu, Xiaoming; Metaxas, Dimitris
2014-10-01
We hypothesize that an individual computer user has a unique and consistent habitual pattern of hand movements, independent of the text, while typing on a keyboard. As a result, this paper proposes a novel biometric modality named typing behavior (TB) for continuous user authentication. Given a webcam pointing toward a keyboard, we develop real-time computer vision algorithms to automatically extract hand movement patterns from the video stream. Unlike the typical continuous biometrics, such as keystroke dynamics (KD), TB provides a reliable authentication with a short delay, while avoiding explicit key-logging. We collect a video database where 63 unique subjects type static text and free text for multiple sessions. For one typing video, the hands are segmented in each frame and a unique descriptor is extracted based on the shape and position of hands, as well as their temporal dynamics in the video sequence. We propose a novel approach, named bag of multi-dimensional phrases, to match the cross-feature and cross-temporal pattern between a gallery sequence and probe sequence. The experimental results demonstrate a superior performance of TB when compared with KD, which, together with our ultrareal-time demo system, warrant further investigation of this novel vision application and biometric modality.
Characteristics of Silver Carp (Hypophthalmichthys molitrix) Using Video Analyses and Principles of Projectile Physics
Parsons, Glenn R.; Stell, Ehlana
2016-09-01
… (2002) estimated maximum swim speeds of videotaped, captive, and free-ranging dolphins (Delphinidae) by timed sequential analyses of video frames … videos to estimate the swim speeds and leap characteristics of carp as they exit the water's surface. We used both direct estimates of swim speeds as …
Lee, I-Jui; Chen, Chien-Hsu; Lin, Ling-Yi
2016-01-01
Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, like facial expressions, that usually aid social interaction. In this study, we used software technology to create half-static and dynamic video materials to teach adolescents with ASD to become aware of six basic facial expressions observed in real situations. The intervention system presents a dynamic video of a specific element within a static surrounding frame, helping the six adolescents with ASD focus their attention on the relevant dynamic facial expressions and ignore irrelevant ones. Using a multiple baseline design across participants, we found that the intervention learning system provided a simple yet effective way for adolescents with ASD to focus their attention on the nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions. We conclude that the limited amount of information, with structured and specific close-up visual social cues, helped the participants improve judgments of the emotional meaning of the facial expressions of others.
NASA Astrophysics Data System (ADS)
Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.
2012-04-01
More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. Several features are fully implemented, among which are: • a user login system for fine-grained permission and access control • video watching • video search using keywords, geographic position, depth and time range, and any combination thereof • video annotation organised in themes (tracks) such as biology and geology, in standard or full-screen mode • annotation keyword management: administrative users can add, delete, and update single keywords or upload sets of keywords from Excel sheets • download of products for scientific use. This unique web application helps make costly ROV videos available online (estimated costs range between 5,000 and 10,000 euros per hour, depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantaneously available and valuable knowledge to otherwise uncharted material.
Automatic attention-based prioritization of unconstrained video for compression
NASA Astrophysics Data System (ADS)
Itti, Laurent
2004-06-01
We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Compressed file sizes are reduced substantially, to roughly half on average, for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
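The foveation idea, blurring increasingly with distance from a high-priority region, can be approximated in a few lines. The box-blur stand-in and the linear distance weighting below are assumptions for illustration, not the paper's filter:

```python
import numpy as np

def box_blur(img, k=7):
    """Cheap k x k mean blur with edge padding (stand-in for a real
    low-pass foveation filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def foveate(frame, center):
    """Blend sharp and blurred copies of a grayscale frame; the blur
    weight grows linearly with distance from the single high-priority
    point `center` (the paper's second variant uses three such centers).
    """
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    weight = dist / dist.max()   # 0 at the fovea, 1 at the farthest corner
    return (1.0 - weight) * frame + weight * box_blur(frame)

rng = np.random.default_rng(1)
frame = rng.random((32, 32))
out = foveate(frame, (16, 16))
print(out.shape)  # (32, 32)
```

Because the blurred periphery compresses better than sharp detail, feeding such frames to a standard encoder is what yields the file-size reductions the abstract reports.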
The Role of Theory and Technology in Learning Video Production: The Challenge of Change
ERIC Educational Resources Information Center
Shewbridge, William; Berge, Zane L.
2004-01-01
The video production field has evolved beyond being exclusively relevant to broadcast television. The convergence of low-cost consumer cameras and desktop computer editing has led to new applications of video in a wide range of areas, including the classroom. This presents educators with an opportunity to rethink how students learn video…
High-Speed Video Analysis in a Conceptual Physics Class
ERIC Educational Resources Information Center
Desbien, Dwain M.
2011-01-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…
Color infrared video mapping of upland and wetland communities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, H.E. Jr.; Jensen, J.R.; Hodgson, M.E.
1987-01-01
Color infrared images were obtained using a video remote sensing system at 3000 and 5000 feet over a variety of terrestrial and wetland sites on the Savannah River Plant near Aiken, SC. The terrestrial sites ranged from secondary successional old-field areas to even-aged pine stands treated with varying levels of sewage sludge. The wetland sites ranged from marsh and macrophyte areas to mature cypress-tupelo swamp forests. The video data were collected in three spectral channels, 0.5-0.6 μm, 0.6-0.7 μm, and 0.7-1.1 μm, at a 12.5 mm focal length. The data were converted to digital form and processed with standard techniques. Comparisons of the video images were made with aircraft multispectral scanner (MSS) data collected previously from the same sites. The analyses of the video data indicated that this technique may present a low-cost alternative for evaluation of vegetation and land-cover types for environmental monitoring and assessment.
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
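A crude way to see the common/innovation split that CIV formalizes (the per-pixel median below is an illustrative stand-in for the paper's joint compressed-sensing estimate, and the threshold value is arbitrary):

```python
import numpy as np

def decompose(frames, thresh=0.1):
    """Toy common/innovation split: the common frame is the per-pixel
    median over the segment, and each innovation keeps only differences
    larger than `thresh`, so innovations are sparse for mostly-static
    scenes.
    """
    frames = np.asarray(frames, dtype=float)
    common = np.median(frames, axis=0)
    diffs = frames - common
    innovations = np.where(np.abs(diffs) > thresh, diffs, 0.0)
    return common, innovations

# Static zero background with a bright blob moving one pixel per frame:
scene = np.zeros((5, 8, 8))
for t in range(5):
    scene[t, 4, t] = 1.0
common, innov = decompose(scene)
print(int(np.count_nonzero(innov[2])))  # 1
```

The moving blob lands entirely in the sparse innovations while the shared background lands in the common frame, which is the property CIV exploits for tasks like scene-change detection and object removal.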
Variable Shadow Screens for Imaging Optical Devices
NASA Technical Reports Server (NTRS)
Lu, Ed; Chretien, Jean L.
2004-01-01
Variable shadow screens have been proposed for reducing the apparent brightnesses of very bright light sources relative to other sources within the fields of view of diverse imaging optical devices, including video and film cameras and optical devices for imaging directly into the human eye. In other words, variable shadow screens would increase the effective dynamic ranges of such devices. Traditionally, imaging sensors are protected against excessive brightness by use of dark filters and/or reduction of iris diameters. These traditional means do not increase dynamic range; they reduce the ability to view or image dimmer features of an image because they reduce the brightness of all parts of an image by the same factor. On the other hand, a variable shadow screen would darken only the excessively bright parts of an image. For example, dim objects in a field of view that included the setting Sun or bright headlights could be seen more readily in a picture taken through a variable shadow screen than in a picture of the same scene taken through a dark filter or a narrowed iris. The figure depicts one of many potential variations of the basic concept of the variable shadow screen. The shadow screen would be a normally transparent liquid-crystal matrix placed in front of a focal-plane array of photodetectors in a charge-coupled-device video camera. The shadow screen would be placed far enough from the focal plane so as not to disrupt the focal-plane image to an unacceptable degree, yet close enough so that the out-of-focus shadows cast by the screen would still be effective in darkening the brightest parts of the image. The image detected by the photodetector array itself would be used as feedback to drive the variable shadow screen: The video output of the camera would be processed by suitable analog and/or digital electronic circuitry to generate a negative partial version of the image to be impressed on the shadow screen. 
The parts of the shadow screen in front of those parts of the image with brightness below a specified threshold would be left transparent; the parts of the shadow screen in front of those parts of the image where the brightness exceeded the threshold would be darkened by an amount that would increase with the excess above the threshold.
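The threshold-and-proportional-darkening rule described above can be sketched as a per-pixel transmission mask; the function name, parameter names, and values are hypothetical:

```python
import numpy as np

def shadow_mask(image, threshold=0.8, gain=1.0):
    """Feedback mask for a hypothetical variable shadow screen.

    Pixels at or below `threshold` brightness stay fully transparent
    (transmission 1.0); above it, transmission drops in proportion to the
    excess brightness, so only the over-bright image regions are darkened.
    """
    excess = np.clip(image - threshold, 0.0, None)
    return np.clip(1.0 - gain * excess, 0.0, 1.0)

brightness = np.array([0.2, 0.5, 0.9, 1.5])
mask = shadow_mask(brightness)
print(mask)  # dim pixels pass unchanged; the over-bright ones are attenuated
```

Applying this mask in a loop with the camera's own output closes the feedback path the text describes, darkening a bright source without dimming the rest of the scene.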
Effects of age on associating virtual and embodied toys.
Okita, Sandra Y
2004-08-01
Technologies such as videos, toys, and video games are used as tools in delivering education to young children. Do children spontaneously transfer between virtual and real-world mediums as they learn? Fifty-six children learned facts about a toy dog presented through varying levels of technology and interactivity (e.g., video game, stuffed animal, picture books). They then met a similar dog character in a new embodiment (e.g., as a stuffed animal if they had first met the dog as a video character). Would children spontaneously generalize the facts they learned about the dog character across mediums (dynamic and static environments)? Results indicate that younger children were more likely to generalize facts across mediums. Specific aspects of the level of technology and interactivity had little effect.
Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age
ERIC Educational Resources Information Center
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
2013-01-01
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Wisniewska, Danuta M; Ratcliffe, John M; Beedholm, Kristian; Christensen, Christian B; Johnson, Mark; Koblitz, Jens C; Wahlberg, Magnus; Madsen, Peter T
2015-01-01
Toothed whales use sonar to detect, locate, and track prey. They adjust emitted sound intensity, auditory sensitivity and click rate to target range, and terminate prey pursuits with high-repetition-rate, low-intensity buzzes. However, their narrow acoustic field of view (FOV) is considered stable throughout target approach, which could facilitate prey escape at close-range. Here, we show that, like some bats, harbour porpoises can broaden their biosonar beam during the terminal phase of attack but, unlike bats, maintain the ability to change beamwidth within this phase. Based on video, MRI, and acoustic-tag recordings, we propose this flexibility is modulated by the melon and implemented to accommodate dynamic spatial relationships with prey and acoustic complexity of surroundings. Despite independent evolution and different means of sound generation and transmission, whales and bats adaptively change their FOV, suggesting that beamwidth flexibility has been an important driver in the evolution of echolocation for prey tracking. DOI: http://dx.doi.org/10.7554/eLife.05651.001 PMID:25793440
Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.
Chandrasekaran, Jeyamala; Thiruvengadam, S J
2015-01-01
Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.
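The chaotic, key-dependent S-box idea can be illustrated with a small sketch using the standard 2-D Henon recurrence x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n (a = 1.4, b = 0.3) and a rank-ordering construction. The seed values, burn-in length, and ranking scheme are illustrative assumptions, not the paper's design.

```python
def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3, burn_in=100):
    """Iterate the 2-D Henon map and return n post-burn-in x values."""
    out = []
    for i in range(burn_in + n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn_in:
            out.append(x)
    return out

def keyed_sbox(x0, y0):
    """Rank 256 chaotic samples to obtain a key-dependent permutation of 0..255."""
    samples = henon_sequence(256, x=x0, y=y0)
    order = sorted(range(256), key=lambda i: samples[i])
    sbox = [0] * 256
    for rank, idx in enumerate(order):
        sbox[idx] = rank
    return sbox

sbox = keyed_sbox(0.123, 0.321)
print(sbox[:8])
```

Because the map is deterministic, the same key (initial condition) always reproduces the same S-box, while a tiny key change yields a completely different permutation, the key-sensitivity property the abstract tests for.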
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
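Affinity propagation, the exemplar-based clustering used above for frame sampling, can be sketched from its textbook responsibility/availability message-passing updates. The toy 1-D "frame features", the negative-squared-distance similarity, the damping factor, and the iteration count are all assumptions for illustration, not the paper's implementation.

```python
def affinity_propagation(s, iters=200, damping=0.5):
    """s: full similarity matrix (list of lists). Returns exemplar indices."""
    n = len(s)
    r = [[0.0] * n for _ in range(n)]   # responsibilities
    a = [[0.0] * n for _ in range(n)]   # availabilities
    for _ in range(iters):
        for i in range(n):
            for k in range(n):
                best = max(a[i][kk] + s[i][kk] for kk in range(n) if kk != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - best)
        for i in range(n):
            for k in range(n):
                pos = sum(max(0.0, r[ii][k]) for ii in range(n) if ii not in (i, k))
                upd = pos if i == k else min(0.0, r[k][k] + pos)
                a[i][k] = damping * a[i][k] + (1 - damping) * upd
    ex = [k for k in range(n) if r[k][k] + a[k][k] > 0]
    return ex or [max(range(n), key=lambda k: r[k][k] + a[k][k])]

# Toy "frames": two visually distinct shot groups, as 1-D feature values.
feats = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
n = len(feats)
s = [[-(feats[i] - feats[k]) ** 2 for k in range(n)] for i in range(n)]
off = sorted(s[i][k] for i in range(n) for k in range(n) if i != k)
pref = off[len(off) // 2]            # median similarity as shared preference
for i in range(n):
    s[i][i] = pref
exemplars = affinity_propagation(s)
print(exemplars)
```

The chosen exemplar frames play the role of the sampled frames that make up a "tiny video"; the preference value controls how many exemplars emerge.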
Hierarchical structure for audio-video based semantic classification of sports video sequences
NASA Astrophysics Data System (ADS)
Kolekar, M. H.; Sengupta, S.
2005-07-01
A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, that of cricket is very challenging and as yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP), using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to other sports. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
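The HMM-with-dynamic-programming classification at the heart of such a hierarchy rests on the Viterbi algorithm, which recovers the most likely hidden event sequence from per-frame observation likelihoods. A minimal sketch follows; the two event states, the observation alphabet, and all probabilities are invented for illustration and are not the paper's trained model.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (log-probability, most likely state path) for an observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return V[-1][last], path[::-1]

# Invented two-state "broadcast segment" model: ordinary play vs. replay.
states = ("play", "replay")
start = {"play": 0.7, "replay": 0.3}
trans = {"play": {"play": 0.8, "replay": 0.2},
         "replay": {"play": 0.4, "replay": 0.6}}
emit = {"play": {"motion_high": 0.6, "motion_low": 0.4},
        "replay": {"motion_high": 0.2, "motion_low": 0.8}}
obs = ("motion_high", "motion_high", "motion_low")
logp, path = viterbi(obs, states, start, trans, emit)
print(path)
```

Log-probabilities are used instead of raw products to avoid numerical underflow on long observation sequences.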
Kuo, Chung-Feng Jeffrey; Wang, Hsing-Won; Hsiao, Shang-Wun; Peng, Kai-Ching; Chou, Ying-Liang; Lai, Chun-Yu; Hsu, Chien-Tung Max
2014-01-01
Physicians clinically use a laryngeal video stroboscope as an auxiliary instrument to examine glottal diseases, reading vocal fold images and voice quality for diagnosis. Because the position of the vocal fold varies from person to person, the proportion of the vocal fold size as presented in the vocal fold image differs, making it impossible to directly estimate relevant glottal physiological parameters, such as the length, area, perimeter, and opening angle of the glottis. Hence, this study designs an innovative laser projection marking module for the laryngeal video stroboscope to provide reference parameters for image scaling conversion. The module is installed on the laryngeal video stroboscope and projects laser beams onto the glottis plane, providing reference parameters for the scaling conversion of laryngeal video stroboscope images.
Use of videotape for off-line viewing of computer-assisted radionuclide cardiology studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, J.H.; Pitt, B.; Marx, R.S.
1978-02-01
Videotape offers an inexpensive method for off-line viewing of dynamic radionuclide cardiac studies. Two approaches to videotaping have been explored and demonstrated to be feasible. In the first, a video camera in conjunction with a cassette-type recorder is used to record from the computer display scope. Alternatively, for computer systems already linked to video display units, the video signal can be routed directly to the recorder. Acceptance and use of tracer cardiology studies will be enhanced by increased availability of the studies for clinical review. Videotape offers an inexpensive flexible means of achieving this.
Millimeter-wave detection using resonant tunnelling diodes
NASA Technical Reports Server (NTRS)
Mehdi, I.; Kidner, C.; East, J. R.; Haddad, G. I.
1990-01-01
A lattice-matched InGaAs/InAlAs resonant tunnelling diode is studied as a video detector in the millimeter-wave range. Tangential signal sensitivity and video resistance measurements are made as a function of bias and frequency. A tangential signal sensitivity of -37 dBm (1 MHz amplifier bandwidth) with a corresponding video resistance of 350 ohms at 40 GHz has been measured. These results appear to be the first millimeter-wave tangential signal sensitivity and video resistance results for a resonant tunnelling diode.
2017-01-01
This study was conducted to evaluate the performance and reach of YouTube videos on physical examinations made by Spanish university students. We analyzed performance metrics for 4 videos on physical examinations in Spanish that were created by medical students at Miguel Hernández University (Elche, Spain) and are available on YouTube, on the following topics: the head and neck (7:30), the cardiovascular system (7:38), the respiratory system (13:54), and the abdomen (11:10). We used the Analytics application offered by the YouTube platform to analyze the reach of the videos from the upload date (February 17, 2015) to July 28, 2017 (2 years, 5 months, and 11 days). The total number of views, length of watch-time, and the mean view duration for the 4 videos were, respectively: 164,403 views (mean, 41,101 views; range, 12,389 to 94,573 views), 425,888 minutes (mean, 106,472 minutes; range, 37,889 to 172,840 minutes), and 2:56 minutes (range, 1:49 to 4:03 minutes). Mexico was the most frequent playback location, followed by Spain, Colombia, and Venezuela. Uruguay, Ecuador, Mexico, and Puerto Rico had the most views per 100,000 population. Spanish-language tutorials are an alternative tool for teaching physical examination skills to students whose first language is not English. The videos were especially popular in Uruguay, Ecuador, and Mexico. PMID:29278903
Interfacial Dynamics of Condensing Vapor Bubbles in an Ultrasonic Acoustic Field
NASA Astrophysics Data System (ADS)
Boziuk, Thomas; Smith, Marc; Glezer, Ari
2016-11-01
Enhancement of vapor condensation in quiescent subcooled liquid using ultrasonic actuation is investigated experimentally. The vapor bubbles are formed by direct injection from a pressurized steam reservoir through nozzles of varying characteristic diameters and are advected within an acoustic field of programmable intensity. While kHz-range acoustic actuation typically couples to the capillary instability of the vapor-liquid interface, ultrasonic (MHz-range) actuation leads to the formation of a liquid spout that penetrates into the vapor bubble and significantly increases its surface area and therefore its condensation rate. Focusing of the ultrasonic beam along the spout leads to the ejection of small-scale droplets that are propelled towards the vapor-liquid interface and result in localized acceleration of the condensation. High-speed video of schlieren images is used to investigate the effects of the ultrasonic actuation on the thermal boundary layer on the liquid side of the vapor-liquid interface and its effect on the condensation rate, and the liquid motion during condensation is investigated using high-magnification PIV measurements. High-speed image processing is used to assess the effect of the actuation on the dynamics and temporal variation in the characteristic scale (and condensation rate) of the vapor bubbles.
Power-rate-distortion analysis for wireless video communication under energy constraint
NASA Astrophysics Data System (ADS)
He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq
2004-01-01
In video coding and streaming over wireless communication network, the power-demanding video encoding operates on the mobile devices with limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of the wireless video communication system under the energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends the traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuits design, the complexity scalability can be translated into the power consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing the picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraint, especially over the wireless video sensor network.
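The P-R-D idea can be illustrated with a toy model. The functional form below (exponential decay of distortion with rate, with a sublinear P^(2/3) complexity gain reflecting the roughly cubic power-frequency relation under dynamic voltage scaling) and all constants are assumptions for illustration, not the paper's fitted model.

```python
def distortion(rate_bpp, power, sigma2=1.0, lam=1.2):
    """Toy P-R-D surface: distortion falls with encoding rate and with power.

    power is a normalized encoder power level; the 2/3 exponent is a stand-in
    for the sublinear complexity-vs-power gain under DVS.
    """
    return sigma2 * 2.0 ** (-lam * rate_bpp * power ** (2.0 / 3.0))

def best_power_under_budget(rate_bpp, powers, budget):
    """Pick the power level minimizing distortion within the energy budget."""
    feasible = [p for p in powers if p <= budget]
    return min(feasible, key=lambda p: distortion(rate_bpp, p))

levels = [0.25, 0.5, 0.75, 1.0]      # normalized encoder power levels (DVS states)
p = best_power_under_budget(rate_bpp=0.5, powers=levels, budget=0.8)
print(p, distortion(0.5, p))
```

The selection step mirrors the paper's claim that an encoder can adjust its complexity control parameters to match the available energy supply while maximizing picture quality.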
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2012-09-30
Deployment of a comprehensive optical suite including underwater video-polarimetry (full Stokes vector video-imaging camera custom-built by Cummings; and... During field operations, we couple polarimetry measurements of live, free-swimming animals in their environments with a full suite of optical... Seibel, Ahmed). We also restrain live, awake animals to take polarimetry measurements (in the field and laboratory) under a complete set of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.
Digital lock-in amplification (LIA) with synchronous digitization (SD) is shown to provide significant signal-to-noise (S/N) and linear dynamic range advantages in beam-scanning microscopy measurements using pulsed laser sources. Direct comparisons between SD-LIA and conventional LIA in homodyne second harmonic generation measurements resulted in S/N enhancements consistent with theoretical models. SD-LIA provided notably larger S/N enhancements in the limit of low light intensities, through the smooth transition between photon counting and signal averaging developed in previous work. Rapid beam scanning instrumentation with up to video rate acquisition speeds minimized photo-induced sample damage. The corresponding increased allowance for higher laser power without sample damage is advantageous for increasing the observed signal content.
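The core of digital lock-in amplification can be sketched independently of the instrument described above: multiply the digitized signal by quadrature references at the modulation frequency and average, which rejects noise outside a narrow band around that frequency. The sampling rate, reference frequency, and noise level below are illustrative assumptions.

```python
import math
import random

def lock_in(samples, f_ref, f_s):
    """Recover (amplitude, phase) of the f_ref component of a sampled signal."""
    n = len(samples)
    x = sum(v * math.cos(2 * math.pi * f_ref * i / f_s) for i, v in enumerate(samples)) * 2 / n
    y = sum(v * math.sin(2 * math.pi * f_ref * i / f_s) for i, v in enumerate(samples)) * 2 / n
    return math.hypot(x, y), math.atan2(y, x)

random.seed(0)
f_s, f_ref, amp = 10000.0, 100.0, 0.5
# A weak modulated signal buried in noise four times its amplitude.
sig = [amp * math.cos(2 * math.pi * f_ref * i / f_s) + random.gauss(0.0, 2.0)
       for i in range(100000)]
mag, phase = lock_in(sig, f_ref, f_s)
print(round(mag, 3))
```

Averaging over an integer number of reference periods (here 1000) keeps the estimate unbiased; longer records narrow the effective detection bandwidth and improve S/N.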
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.; Edwards, John W.; Schuster, David M.
2001-01-01
The techniques used to acquire, reduce, and analyze dynamic deformation measurements of an aeroelastic semispan wind tunnel model are presented. Single-camera, single-view video photogrammetry (also referred to as videogrammetric model deformation, or VMD) was used to determine dynamic aeroelastic deformation of the semispan 'Models for Aeroelastic Validation Research Involving Computation' (MAVRIC) model in the Transonic Dynamics Tunnel at the NASA Langley Research Center. Dynamic deformation was determined from optical retroreflective tape targets at five semispan locations located on the wing from the root to the tip. Digitized video images from a charge coupled device (CCD) camera were recorded and processed to automatically determine target image plane locations that were then corrected for sensor, lens, and frame grabber spatial errors. Videogrammetric dynamic data were acquired at a 60-Hz rate for time records of up to 6 seconds during portions of this flutter/Limit Cycle Oscillation (LCO) test at Mach numbers from 0.3 to 0.96. Spectral analysis of the deformation data is used to identify dominant frequencies in the wing motion. The dynamic data will be used to separate aerodynamic and structural effects and to provide time history deflection data for Computational Aeroelasticity code evaluation and validation.
Space-Based Range Safety and Future Space Range Applications
NASA Technical Reports Server (NTRS)
Whiteman, Donald E.; Valencia, Lisa M.; Simpson, James C.
2005-01-01
The National Aeronautics and Space Administration (NASA) Space-Based Telemetry and Range Safety (STARS) study is a multiphase project to demonstrate the performance, flexibility and cost savings that can be realized by using space-based assets for the Range Safety [global positioning system (GPS) metric tracking data, flight termination command and range safety data relay] and Range User (telemetry) functions during vehicle launches and landings. Phase 1 included flight testing S-band Range Safety and Range User hardware in 2003 onboard a high-dynamic aircraft platform at Dryden Flight Research Center (Edwards, California, USA) using the NASA Tracking and Data Relay Satellite System (TDRSS) as the communications link. The current effort, Phase 2, includes hardware and packaging upgrades to the S-band Range Safety system and development of a high data rate Ku-band Range User system. The enhanced Phase 2 Range Safety Unit (RSU) provided real-time video for three days during the historic Global Flyer (Scaled Composites, Mojave, California, USA) flight in March, 2005. Additional Phase 2 testing will include a sounding rocket test of the Range Safety system and aircraft flight testing of both systems. Future testing will include a flight test on a launch vehicle platform. This paper discusses both Range Safety and Range User developments and testing with emphasis on the Range Safety system. The operational concept of a future space-based range is also discussed.
Effects of Commercial Web Videos on Students' Attitude toward Learning Technology
ERIC Educational Resources Information Center
Tai, Yaming; Ting, Yu-Liang
2015-01-01
This study values the broad range of web videos produced by businesses to introduce new technologies while also promoting their products. When the promoted technology is related to the topic taught in a school course, it may be beneficial for students to watch such videos. However, most students view the web as a source for entertainment, and may…
3D reconstruction of a tree stem using video images and pulse distances
N. E. Clark
2002-01-01
This paper demonstrates how a 3D tree stem model can be reconstructed using video imagery combined with laser pulse distance measurements. Perspective projection is used to place the data collected with the portable video laser-rangefinding device into a real world coordinate system. This hybrid methodology uses a relatively small number of range measurements (compared...
Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito
2006-01-01
We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joints, muscles, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model, and allows dynamic skeletal state analysis from body surface movement data, was also developed. We applied the developed system to dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.
YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks
NASA Astrophysics Data System (ADS)
Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin
2018-06-01
In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector, where the recurrent YoTube explores the regression capability of RNNs to predict candidate bounding boxes using learnt temporal dynamics, and the static YoTube produces bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion, and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed video effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state of the art.
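The final linking step, choosing one box per frame by dynamic programming to maximize summed confidence plus inter-frame overlap, can be sketched as follows. The boxes, scores, and overlap weight are invented for illustration, and the paper's trimming method is omitted.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def link_tube(frames, overlap_weight=1.0):
    """frames: per-frame list of (box, score). Returns best box index per frame."""
    scores = [[s for _, s in f] for f in frames]   # running DP scores
    back = []
    for t in range(1, len(frames)):
        back.append([])
        new = []
        for box_j, s_j in frames[t]:
            cands = [scores[t - 1][i] + overlap_weight * iou(frames[t - 1][i][0], box_j)
                     for i in range(len(frames[t - 1]))]
            i_best = max(range(len(cands)), key=cands.__getitem__)
            back[-1].append(i_best)
            new.append(cands[i_best] + s_j)
        scores[t] = new
    j = max(range(len(scores[-1])), key=scores[-1].__getitem__)
    path = [j]
    for t in range(len(back) - 1, -1, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

frames = [
    [((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.5)],
    [((1, 0, 11, 10), 0.8), ((49, 50, 59, 60), 0.9)],
    [((2, 0, 12, 10), 0.9), ((48, 50, 58, 60), 0.4)],
]
tube = link_tube(frames)
print(tube)
```

The backtracking table makes linking linear in the number of frames, with a per-frame cost quadratic in the number of candidate boxes.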
NASA Astrophysics Data System (ADS)
Romo, Jaime E., Jr.
Optical microscopy, the most common technique for viewing living microorganisms, is limited in resolution by Abbe's criterion. Recent microscopy techniques focus on circumventing the light diffraction limit by using different methods to obtain the topography of the sample. Systems like the AFM and SEM provide images with fields of view in the nanometer range with highly resolvable detail; however, these techniques are expensive and limited in their ability to document live cells. The Dino-Lite digital microscope coupled with the Zeiss Axiovert 25 CFL microscope delivers a cost-effective method for recording live cells. Fields of view ranging from 8 microns to 300 microns with fair resolution provide a reliable method for discovering native cell structures at the nanoscale. In this report, cultured HeLa cells are recorded using different optical configurations, resulting in documentation of cell dynamics at high magnification and resolution.
Heterogeneity image patch index and its application to consumer video summarization.
Dang, Chinh T; Radha, Hayder
2014-06-01
Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
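An entropy-of-patches measure in the spirit of the HIP index can be sketched as follows. The patch size, the 4-level quantization, and exact-match patch counting are assumptions for illustration; the actual HIP definition may differ.

```python
import math
from collections import Counter

def patch_entropy(frame, patch=2):
    """Shannon entropy (bits) of coarsely quantized patch patterns in a frame."""
    h, w = len(frame), len(frame[0])
    patches = Counter()
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            key = tuple(frame[y + dy][x + dx] // 64      # 4 gray levels
                        for dy in range(patch) for dx in range(patch))
            patches[key] += 1
    total = sum(patches.values())
    return -sum((c / total) * math.log2(c / total) for c in patches.values())

flat = [[10] * 8 for _ in range(8)]                                      # homogeneous frame
busy = [[(x * 37 + y * 91) % 256 for x in range(8)] for y in range(8)]   # textured frame
print(patch_entropy(flat), patch_entropy(busy))
```

A homogeneous frame scores zero while a textured frame scores high, so plotting this value per frame yields a curve whose peaks mark visually heterogeneous (and hence summary-worthy) frames.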
ATLAS-SOHO: Satellite Arrival and Uncrating, Uncrating of the Propulsion Unit and Electric Module
NASA Technical Reports Server (NTRS)
1995-01-01
The SOHO satellite, part of the International Solar-Terrestrial Physics Program (ISTP), is a solar observatory designed to study the structure, chemical composition, and dynamics of the solar interior. It will also observe the structure (density, temperature and velocity fields), dynamics and composition of the outer solar atmosphere, and the solar wind and its relation to the solar atmosphere. The spacecraft was launched on December 2, 1995. This video shows the unloading of the satellite from the transport plane at the Kennedy Space Center and the lowering to an awaiting flatbed truck. The video also shows the uncrating of the satellite, the propulsion unit and the electric module in a clean room.
Dynamic biometric identification from multiple views using the GLBP-TOP method.
Wang, Yu; Shen, Xuanjing; Chen, Haipeng; Zhai, Yujie
2014-01-01
To realize effective and rapid dynamic biometric identification with low computational complexity, a video-based facial texture method that extracts local binary patterns from three orthogonal planes in the frequency domain of the Gabor transform (GLBP-TOP) was proposed. First, each normalized face was transformed by a Gabor wavelet to obtain the enhanced Gabor magnitude maps, and then the LBP-TOP operator was applied to the maps to extract video texture. Finally, weighted chi-square statistics based on the Fisher criterion were used to realize the identification. The proposed algorithm was shown to be effective in biometric experiments using the Honda/UCSD database, and was robust against changes in illumination and expression.
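The matching step, a weighted chi-square statistic between per-region texture histograms, can be sketched as follows. The histograms and the region weights (stand-ins for the Fisher-criterion weights) are invented for illustration.

```python
def weighted_chi_square(h1, h2, weights):
    """Weighted chi-square distance between per-region histogram lists."""
    dist = 0.0
    for w, r1, r2 in zip(weights, h1, h2):
        for a, b in zip(r1, r2):
            if a + b:                       # skip empty bins (0/0)
                dist += w * (a - b) ** 2 / (a + b)
    return dist

probe   = [[4, 2, 0, 2], [1, 1, 3, 3]]   # LBP histograms for two face regions
gallery = [[4, 2, 0, 2], [0, 2, 3, 3]]
weights = [2.0, 0.5]                     # more discriminative region weighted higher
print(weighted_chi_square(probe, gallery, weights))
```

Identification then amounts to assigning the probe to the gallery identity with the smallest weighted distance.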
Mezher, Ahmad Mohamad; Igartua, Mónica Aguilar; de la Cruz Llopis, Luis J; Pallarès Segarra, Esteve; Tripp-Barba, Carolina; Urquiza-Aguiar, Luis; Forné, Jordi; Sanvicente Gargallo, Emilio
2015-04-17
The prevention of accidents is one of the most important goals of ad hoc networks in smart cities. When an accident happens, dynamic sensors (e.g., citizens with smartphones or tablets, smart vehicles and buses, etc.) could shoot a video clip of the accident and send it through the ad hoc network. With a video message, the seriousness of the accident can be evaluated much better by the authorities (e.g., health care units, police, and ambulance drivers) than with a simple text message. Besides, other citizens would rapidly become aware of the incident. In this way, smart dynamic sensors could participate in reporting a situation in the city using the ad hoc network, making possible a quick reaction warning citizens and emergency units. The deployment of an efficient routing protocol to manage video-warning messages in mobile ad hoc networks (MANETs) has important benefits, since a fast warning of the incident can potentially save lives. To contribute to this goal, we propose a multipath routing protocol to provide video-warning messages in MANETs using a novel game-theoretical approach. As a base for our work, we start from our previous work, where a 2-player game-theoretical routing protocol was proposed to provide video-streaming services over MANETs. In this article, we generalize the analysis to an arbitrary number N of players in the MANET. Simulations have been carried out to show the benefits of our proposal, taking into account the mobility of the nodes and the presence of interfering traffic. Finally, we also tested our approach in a vehicular ad hoc network as an incipient starting point for a novel proposal specifically designed for VANETs.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
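The final disparity-to-range step reduces to classic triangulation: with the laser offset from the camera by a fixed baseline, the pixel disparity between the detected spot and the reference point varies inversely with distance. A minimal sketch under that assumption (the function name and calibration parameters are illustrative, not taken from the patent):

```python
def range_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate range to the target from the pixel disparity between the
    detected laser spot and the reference point.

    Plain triangulation for an offset laser/camera pair:
        range = focal_length * baseline / disparity
    This exact model and its parameter names are assumptions; the patent
    does not publish its calibration equations.
    """
    if disparity_px <= 0:
        raise ValueError("non-positive disparity; laser spot not detected?")
    return focal_px * baseline_m / disparity_px

# e.g. an 800 px focal length, 10 cm laser offset and 20 px disparity
# correspond to a target roughly 4 m away
print(range_from_disparity(800.0, 0.10, 20.0))
```

Note the inverse relationship: halving the range doubles the disparity, so precision is best at close range.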
Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan
2004-06-01
Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative models employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image capturing devices and software enable the use of digitised video streams, tailored precisely to the specific situation. Here, we demonstrate that with reasonable financial resources, it is possible to prepare video sequences and to introduce them into the PowerPoint presentation, thereby shaping the teaching process according to the specific needs of individual students.
Fast and predictable video compression in software design and implementation of an H.261 codec
NASA Astrophysics Data System (ADS)
Geske, Dagmar; Hess, Robert
1998-09-01
The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound for the computing time used by the codec. This leads to the demand for predictable execution times of compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, an adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. Absolute speed is also of interest: we examine whether and how software compression of high-quality video is feasible on today's desktop computers.
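The macroblock-reduction idea can be sketched as a per-frame budget check; the following is an illustrative stand-in (all names and numbers are assumptions, not the codec's published algorithm):

```python
def macroblocks_to_process(total_mb: int, budget_ms: float, cost_per_mb_ms: float) -> int:
    """Cap the number of macroblocks encoded this frame so the codec stays
    within the computing time reserved for it on the endsystem.

    In peak situations (high motion, many changed macroblocks) the encoder
    processes only as many macroblocks as the budget affords and defers the
    rest, keeping the per-frame execution time predictable.
    """
    affordable = int(budget_ms / cost_per_mb_ms)
    return max(0, min(total_mb, affordable))

# A CIF frame has 396 macroblocks; with a 33 ms frame budget and an
# (assumed) cost of 0.25 ms per macroblock, only 132 are encoded.
print(macroblocks_to_process(396, 33.0, 0.25))
```

The upper bound on execution time then follows directly from the cap, which is what makes resource reservation on the endsystem feasible.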
Goodwin, Shikha Jain; Dziobek, Derek
2016-09-01
Ever since video games became available to the general public, they have intrigued brain researchers for many reasons. There is an enormous amount of diversity in video game research, ranging from the types of video games used, the amount of time spent playing, and the definition of video gamer versus non-gamer, to the results obtained after playing video games. In this paper, our goal is to provide a critical discussion of these issues, along with some steps towards generalization, using the discussion of an article published by Clemenson and Stark (2005) as the starting point. The authors used a distinction between 2D and 3D video games to compare their effects on learning and memory in humans. Their primary hypothesis is that the exploration of virtual environments while playing video games is a human correlate of environmental enrichment. The authors found that video gamers performed better than non-gamers, and that if non-gamers are trained to play video games, 3D games provide better environmental enrichment than 2D video games, as indicated by better memory scores. The end goal of standardization in video games is to be able to translate the field so that the results can be used for the greater good.
ERIC Educational Resources Information Center
Mizell, Al P.; And Others
Distance learning involves students and faculty engaged in interactive instructional settings when they are at different locations. Compressed video is the live transmission of two-way auditory and visual signals at the same time between sites at different locations. The use of compressed video has expanded in recent years, ranging from use by the…
NASA Astrophysics Data System (ADS)
Işık, Şahin; Özkan, Kemal; Günal, Serkan; Gerek, Ömer Nezih
2018-03-01
Change detection with a background subtraction process remains an unresolved issue and attracts research interest due to challenges encountered in static and dynamic scenes. The key challenge is how to update dynamically changing backgrounds from frames with an adaptive and self-regulated feedback mechanism. To achieve this, we present an effective change detection algorithm for pixelwise changes. A sliding window approach combined with dynamic control of update parameters is introduced for updating background frames, which we call sliding window-based change detection. Comprehensive experiments on related test videos show that the integrated algorithm yields good objective and subjective performance by overcoming illumination variations, camera jitter, and intermittent object motion. We argue that the resulting method is a fair alternative in most types of foreground extraction scenarios, unlike case-specific methods, which typically fail in scenarios they were not designed for.
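A minimal pixelwise sliding-window detector in this spirit can be sketched as follows. This is a simplification assuming a per-pixel median background over the window and a fixed threshold; the actual algorithm controls its update parameters dynamically through a feedback mechanism:

```python
import numpy as np
from collections import deque

class SlidingWindowDetector:
    """Sketch of sliding-window change detection: the background model is
    the per-pixel median of the last `window` frames, so it adapts as old
    frames slide out of the buffer."""

    def __init__(self, window: int = 5, threshold: float = 30.0):
        self.frames = deque(maxlen=window)   # sliding window of recent frames
        self.threshold = threshold

    def apply(self, frame: np.ndarray) -> np.ndarray:
        """Return a binary foreground mask for a grayscale `frame`."""
        self.frames.append(frame.astype(np.float64))
        background = np.median(np.stack(self.frames), axis=0)
        return (np.abs(frame - background) > self.threshold).astype(np.uint8)
```

The median makes transient changes (an object passing through) drop out of the background quickly, while persistent scene changes are absorbed once they dominate the window.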
Bubble and Drop Nonlinear Dynamics experiment
NASA Technical Reports Server (NTRS)
2003-01-01
The Bubble and Drop Nonlinear Dynamics (BDND) experiment was designed to improve understanding of how the shape and behavior of bubbles respond to ultrasound pressure. By understanding this behavior, it may be possible to counteract complications bubbles cause during materials processing on the ground. This 12-second sequence came from video downlinked from STS-94, July 5 1997, MET:3/19:15 (approximate). The BDND guest investigator was Gary Leal of the University of California, Santa Barbara. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced fluid dynamics experiments will be a part of investigations planned for the International Space Station. (189KB JPEG, 1293 x 1460 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300163.html.
Analysis of Video-Based Microscopic Particle Trajectories Using Kalman Filtering
Wu, Pei-Hsun; Agarwal, Ashutosh; Hess, Henry; Khargonekar, Pramod P.; Tseng, Yiider
2010-01-01
The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by the application of a Kalman filter, a powerful algorithm to estimate the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes. PMID:20550894
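The filtering step can be illustrated with a constant-velocity Kalman filter over a noisy 1-D position track. This is a generic textbook sketch; the paper determines its optimal noise parameters experimentally, whereas the `q` and `r` defaults below are placeholders:

```python
import numpy as np

def kalman_smooth_positions(z, dt=1.0, q=1e-3, r=4.0):
    """Filter noisy 1-D positions z[k] with a constant-velocity Kalman filter.

    State x = [position, velocity]^T; measurement = position + noise.
    q and r are assumed process/measurement noise levels, standing in for
    the experimentally determined optimal parameters.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[z[0]], [0.0]])
    P = np.eye(2)
    filtered = []
    for zk in z:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update with measurement zk
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered
```

On a linear trajectory corrupted with Gaussian noise, the filtered track has markedly lower mean squared error than the raw measurements while still following the underlying motion, which is the behavior the paper exploits.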
Dynamic strain distribution of FRP plate under blast loading
NASA Astrophysics Data System (ADS)
Saburi, T.; Yoshida, M.; Kubota, S.
2017-02-01
The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a digital image correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. 50 g of Composition C4 explosive was used as the blast loading source and set in the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The two high-speed video image sequences were used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile measured directly with a strain gauge, showing that the strain profile obtained under blast loading by the DIC method is quantitatively accurate.
NASA Technical Reports Server (NTRS)
1995-01-01
George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2013-09-30
optical suite including underwater video-polarimetry (full Stokes vector video-imaging camera custom-built Cummings; and "SALSA" (Bossa...operations, we couple polarimetry measurements of live, free-swimming animals in their environments with a full suite of optical measurements...Ahmed). We also restrain live, awake animals to take polarimetry measurements (in the field and laboratory) under a complete set of viewing angles and
Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.
Liu, Chang Hong; Chen, Wenfeng; Ward, James; Takahashi, Nozomi
2016-08-08
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David
2017-03-01
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors that provide only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
This work develops a computational algorithm capable of performing output-only modal analysis from vibration measurements taken at a uniform sampling frequency lower than that required by the Shannon-Nyquist sampling theorem. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally aliased acceleration responses and, in particular, on temporally aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
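The frequency-recovery step rests on the standard aliasing identity: a mode above the Nyquist limit folds back into the measurable band at a predictable location. A small helper illustrating the identity (the function name is an assumption):

```python
def aliased_frequency(f_true: float, fs: float) -> float:
    """Apparent frequency of a vibration mode at f_true Hz when sampled at
    fs Hz below the Nyquist requirement: the mode folds to its distance
    from the nearest integer multiple of the sampling rate."""
    return abs(f_true - round(f_true / fs) * fs)

# A 42 Hz structural mode filmed by a 30 fps camera appears at 12 Hz,
# and a 100 Hz mode appears at 10 Hz.
print(aliased_frequency(42.0, 30.0), aliased_frequency(100.0, 30.0))
```

Inverting this relation is ambiguous for a single frequency, which is why the paper combines it with the modal decoupling step rather than using it in isolation.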
Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua
2013-12-01
This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with the following main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, realized by employing two particle filters: one on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly in scenarios where target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparisons and evaluations against eight existing state-of-the-art and most relevant manifold/non-manifold trackers provide further support for the proposed scheme.
Men's Preferences for Women's Femininity in Dynamic Cross-Modal Stimuli
O'Connor, Jillian J. M.; Fraccaro, Paul J.; Pisanski, Katarzyna; Tigue, Cara C.; Feinberg, David R.
2013-01-01
Men generally prefer feminine women's faces and voices over masculine women's faces and voices, and these cross-modal preferences are positively correlated. Men's preferences for female facial and vocal femininity have typically been investigated independently by presenting soundless still images separately from audio-only vocal recordings. For the first time ever, we presented men with short video clips in which dynamic faces and voices were simultaneously manipulated in femininity/masculinity. Men preferred feminine men's faces over masculine men's faces, and preferred masculine men's voices over feminine men's voices. We found that men preferred feminine women's faces and voices over masculine women's faces and voices. Men's attractiveness ratings of both feminine and masculine faces were increased by the addition of vocal femininity. Also, men's attractiveness ratings of feminine and masculine voices were increased by the addition of facial femininity present in the video. Men's preferences for vocal and facial femininity were significantly and positively correlated when stimuli were female, but not when they were male. Our findings complement other evidence for cross-modal femininity preferences among male raters, and show that preferences observed in studies using still images and/or independently presented vocal stimuli are also observed when dynamic faces and voices are displayed simultaneously in video format. PMID:23936037
A gaze-contingent display to study contrast sensitivity under natural viewing conditions
NASA Astrophysics Data System (ADS)
Dorr, Michael; Bex, Peter J.
2011-03-01
Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
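The synthesis step, a locally weighted sum of pyramid levels, can be sketched with a same-resolution Laplacian decomposition. This is a simplified CPU version with global per-band gains; the described system runs a subsampled pyramid on the GPU and applies gaze-contingent, spatially varying weights:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modulate_contrast(frame: np.ndarray, gains) -> np.ndarray:
    """Decompose `frame` into band-pass (Laplacian) levels, scale each level
    by the corresponding gain, and recombine. With all gains equal to 1 the
    telescoping sum reconstructs the input exactly."""
    levels = []
    current = frame.astype(np.float64)
    for sigma in (1.0, 2.0, 4.0):            # successively coarser bands
        low = gaussian_filter(current, sigma)
        levels.append(current - low)          # band-pass level
        current = low
    levels.append(current)                    # low-pass residual
    out = np.zeros_like(current)
    for level, gain in zip(levels, list(gains) + [1.0]):
        out += gain * level                   # weighted recombination
    return out
```

Setting a band's gain below 1 attenuates contrast in that frequency band only; the real-time system computes such gains per output pixel as a function of gaze position.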
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial differential equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy-balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
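The role of the advection discretization can be illustrated in one dimension. The sketch below uses first-order upwind differencing as a simple stand-in; the paper adopts the higher-order CIP scheme precisely because low-order schemes like this one are too diffusive for long-term extrapolation:

```python
import numpy as np

def advect_upwind(f: np.ndarray, u: float, dt: float, dx: float, steps: int = 1) -> np.ndarray:
    """Advance the 1-D advection equation  df/dt + u df/dx = 0  with a
    first-order upwind scheme on a periodic grid."""
    c = u * dt / dx                          # Courant number; |c| <= 1 for stability
    assert abs(c) <= 1.0, "unstable time step"
    f = f.astype(np.float64).copy()
    for _ in range(steps):
        if u >= 0:
            f -= c * (f - np.roll(f, 1))     # backward difference
        else:
            f -= c * (np.roll(f, -1) - f)    # forward difference
    return f

# With a Courant number of exactly 1 the scheme shifts the profile one
# cell per step without diffusion; smaller Courant numbers smear it.
print(advect_upwind(np.array([0.0, 0.0, 1.0, 0.0]), u=1.0, dt=1.0, dx=1.0))
```

The smearing at intermediate Courant numbers is the quantization error the CIP discretization is designed to reduce over long extrapolation horizons.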
Estimation of low back moments from video analysis: a validation study.
Coenen, Pieter; Kingma, Idsart; Boot, Cécile R L; Faber, Gert S; Xu, Xu; Bongers, Paulien M; van Dieën, Jaap H
2011-09-02
This study aimed to develop, compare and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads. Ten healthy subjects participated in a protocol comprising 12 lifting conditions. Low back moments were assessed using two variants of a video analysis method and a lab-based reference method. Repeated measures ANOVAs showed no overall differences in peak moments between the two versions of the video analysis method and the reference method. However, two conditions showed a minor overestimation of the moments by one of the video analysis methods. Standard deviations were considerable, suggesting that errors in the video analysis were random. Furthermore, there was a small underestimation of the dynamic components and an overestimation of the static components of the moments. Intraclass correlation coefficients for peak moments showed high correspondence (>0.85) of the video analyses with the reference method. It is concluded that, when a sufficient number of measurements can be taken, the video analysis method for assessment of low back loads during lifting tasks provides valid estimates of low back moments in ergonomic practice and epidemiological studies for lifts up to a moderate level of asymmetry. Copyright © 2011 Elsevier Ltd. All rights reserved.
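The static component of such a moment estimate reduces to top-down statics about L5/S1. The sketch below is a textbook simplification with assumed names; the validated method additionally estimates dynamic (inertial) moment components from the video, which this omits:

```python
G = 9.81  # gravitational acceleration, m/s^2

def static_low_back_moment(l5s1_x, segments, load_mass=0.0, load_x=0.0):
    """Static sagittal-plane low back moment (N*m) about L5/S1 from
    digitised horizontal positions.

    segments: iterable of (mass_kg, com_x_m) pairs for the upper-body
    segments (trunk, head, arms); load_mass/load_x describe the lifted
    object. Positive x is anterior, so a positive result is a flexion
    moment that the back extensors must balance.
    """
    moment = sum(m * G * (x - l5s1_x) for m, x in segments)
    moment += load_mass * G * (load_x - l5s1_x)
    return moment

# 40 kg of upper body 0.1 m anterior of L5/S1 plus a 10 kg load held
# 0.4 m anterior gives roughly 78 N*m of static flexion moment.
print(static_low_back_moment(0.0, [(40.0, 0.1)], load_mass=10.0, load_x=0.4))
```

Because the moment arms enter linearly, small digitisation errors in the horizontal positions translate into proportional moment errors, consistent with the random errors reported above.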
Rosen, Hannah; Gilly, William; Bell, Lauren; Abernathy, Kyler; Marshall, Greg
2015-01-15
Dosidicus gigas (Humboldt or jumbo flying squid) is an economically and ecologically influential species, yet little is known about its natural behaviors because of difficulties in studying this active predator in its oceanic environment. By using an animal-borne video package, National Geographic's Crittercam, we were able to observe natural behaviors in free-swimming D. gigas in the Gulf of California with a focus on color-generating (chromogenic) behaviors. We documented two dynamic displays without artificial lighting at depths of up to 70 m. One dynamic pattern, termed 'flashing', is characterized by a global oscillation (2-4 Hz) of body color between white and red. Flashing was almost always observed when other squid were visible in the video frame, and this behavior presumably represents intraspecific signaling. The amplitude and frequency of flashing can be modulated, and the phase relationship with another squid can also be rapidly altered. Another dynamic display, termed 'flickering', was observed whenever flashing was not occurring. This behavior is characterized by irregular wave-like activity in neighboring patches of chromatophores, and the resulting patterns mimic reflections of down-welled light in the water column, suggesting that this behavior may provide a dynamic type of camouflage. Rapid and global pauses in flickering, often before a flashing episode, indicate that flickering is under inhibitory neural control. Although flashing and flickering have not been described in other squid, functional similarities are evident with other species. © 2015. Published by The Company of Biologists Ltd.
Sex differences in facial emotion recognition across varying expression intensity levels from videos
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
2018-01-01
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
Rutz, Christian; Bluff, Lucas A; Weir, Alex A S; Kacelnik, Alex
2007-11-02
New Caledonian crows (Corvus moneduloides) are renowned for using tools for extractive foraging, but the ecological context of this unusual behavior is largely unknown. We developed miniaturized, animal-borne video cameras to record the undisturbed behavior and foraging ecology of wild, free-ranging crows. Our video recordings enabled an estimate of the species' natural foraging efficiency and revealed that tool use, and choice of tool materials, are more diverse than previously thought. Video tracking has potential for studying the behavior and ecology of many other bird species that are shy or live in inaccessible habitats.
The effects of videotape modeling on staff acquisition of functional analysis methodology.
Moore, James W; Fisher, Wayne W
2007-01-01
Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape. PMID:17471805
The recovery and utilization of space suit range-of-motion data
NASA Technical Reports Server (NTRS)
Reinhardt, AL; Walton, James S.
1988-01-01
A technique for recovering data for the range of motion of a subject wearing a space suit is described along with the validation of this technique on an EVA space suit. Digitized data are automatically acquired from video images of the subject; three-dimensional trajectories are recovered from these data, and can be displayed using three-dimensional computer graphics. Target locations are recovered using a unique video processor and close-range photogrammetry. It is concluded that such data can be used in such applications as the animation of anthropometric computer models.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
A Low Cost Microcomputer System for Process Dynamics and Control Simulations.
ERIC Educational Resources Information Center
Crowl, D. A.; Durisin, M. J.
1983-01-01
Discusses a video simulator microcomputer system used to provide real-time demonstrations to strengthen students' understanding of process dynamics and control. Also discusses hardware/software and simulations developed using the system. The four simulations model various configurations of a process liquid level tank system. (JN)
Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837
ERIC Educational Resources Information Center
Levy, Roy
2014-01-01
Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…
Dynamic Assessment in Combination with Video Interaction Guidance in Preschool Education
ERIC Educational Resources Information Center
Krejcová, Kristýna
2015-01-01
Dynamic assessment represents an alternative diagnostic approach focused on the revelation of the tested persons' learning potential. The learning potential is observed via the emphasis on the achievement process. It aims at meaningful connection with the intervention that immediately applies diagnostic findings to support the development of an…
The Orbital Maneuvering Vehicle Training Facility visual system concept
NASA Technical Reports Server (NTRS)
Williams, Keith
1989-01-01
The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the Visual System, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of the video signal. Video system malfunctions will also be provided to ensure that the pilot is ready to meet all challenges the real world might provide. One possible visual system configuration for the training facility that will meet existing requirements is described.
Kliemann, Dorit; Richardson, Hilary; Anzellotti, Stefano; Ayyash, Dima; Haskins, Amanda J; Gabrieli, John D E; Saxe, Rebecca R
2018-06-01
Individuals with Autism Spectrum Disorders (ASD) report difficulties extracting meaningful information from dynamic and complex social cues, like facial expressions. The nature and mechanisms of these difficulties remain unclear. Here we tested whether that difficulty can be traced to the pattern of activity in "social brain" regions, when viewing dynamic facial expressions. In two studies, adult participants (male and female) watched brief videos of a range of positive and negative facial expressions, while undergoing functional magnetic resonance imaging (Study 1: ASD n = 16, control n = 21; Study 2: ASD n = 22, control n = 30). Patterns of hemodynamic activity differentiated among facial emotional expressions in left and right superior temporal sulcus, fusiform gyrus, and parts of medial prefrontal cortex. In both control participants and high-functioning individuals with ASD, we observed (i) similar responses to emotional valence that generalized across facial expressions and animated social events; (ii) similar flexibility of responses to emotional valence, when manipulating the task-relevance of perceived emotions; and (iii) similar responses to a range of emotions within valence. Altogether, the data indicate that there was little or no group difference in cortical responses to isolated dynamic emotional facial expressions, as measured with fMRI. Difficulties with real-world social communication and social interaction in ASD may instead reflect differences in initiating and maintaining contingent interactions, or in integrating social information over time or context. Copyright © 2018 Elsevier Ltd. All rights reserved.
Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data
2016-07-20
Intelligent Automation Incorporated, Monthly Report No. 4: Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data. ...including Top Videos, Top Users, Top Words, and Top Languages, and also applied NER to the text associated with YouTube posts. We have also developed UI for
Carter, Bernie; Bray, Lucy; Keating, Paula; Wilkinson, Catherine
2017-09-15
Caring for a child with complex health care needs places additional stress and time demands on parents. Parents often turn to their peers to share their experiences, gain support, and lobby for change; increasingly this is done through social media. The WellChild #notanurse_but is a parent-driven campaign that states its aim is to "shine a light" on the care parents, who are not nurses, have to undertake for their child with complex health care needs and to raise decision-makers' awareness of the gaps in service provision and support. This article reports on a study that analyzed the #notanurse_but parent-driven campaign videos. The purpose of the study was to consider the videos in terms of the range, content, context, perspectivity (motivation), and affect (sense of being there) in order to inform the future direction of the campaign. Analysis involved repeated viewing of a subset of 30 purposively selected videos and documenting our analysis on a specifically designed data extraction sheet. Each video was analyzed by a minimum of 2 researchers. All but 2 of the 30 videos were filmed inside the home. A variety of filming techniques were used. Mothers were the main narrators in all but 1 set of videos. The sense of perspectivity was clearly linked to the campaign with the narration pressing home the reality, complexity, and need for vigilance in caring for a child with complex health care needs. Different clinical tasks and routines undertaken as part of the child's care were depicted. Videos also reported on a sense of feeling different than "normal families"; the affect varied among the researchers, ranging from strong to weaker emotional responses.
YouTube as a source of information on mouth (oral) cancer.
Hassona, Y; Taimeh, D; Marahleh, A; Scully, C
2016-04-01
We examined the content of YouTube(™) videos on mouth (oral) cancer and evaluated their usefulness in promoting early detection of oral cancer. A systematic search of YouTube(™) for videos containing information on mouth cancer was conducted using the keywords 'mouth cancer' and 'oral cancer'. Demographics of videos, including type, source, length, and viewers' interaction, were evaluated, and three researchers independently assessed the videos for usefulness in promoting early detection of oral cancer. A total of 188 YouTube(™) videos (152 patient-oriented educational videos and 36 testimonial videos) were analyzed. The overall usefulness score ranged from 0 to 10 (mean = 3.56 ± 2.44). The most useful videos ranked late on the viewing list, and there was no significant correlation between video usefulness and viewing rate, viewers' interaction, and video length. Videos uploaded by individual users were less useful compared with videos uploaded by professional organizations or by healthcare professionals. Healthcare professionals, academic institutions, and professional organizations have a responsibility for improving the content of YouTube(™) about mouth cancer by uploading useful videos, and directing patients to reliable information sources. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A simulator tool set for evaluating HEVC/SHVC streaming
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James; Wang, Qi; Grecos, Christos; Kehtarnavaz, Nasser
2015-02-01
Video streaming and other multimedia applications account for an ever-increasing proportion of all network traffic. The recent adoption of High Efficiency Video Coding (HEVC) as the H.265 standard provides many opportunities for new and improved multimedia services and applications in the consumer domain. Since the delivery of version one of H.265, the Joint Collaborative Team on Video Coding has been working towards standardisation of a scalable extension (SHVC) to the H.265 standard and a series of range extensions and new profiles. As these enhancements are added to the standard, the range of potential applications and research opportunities will expand. For example, the use of video is also growing rapidly in other sectors such as safety, security, defence and health, with real-time high-quality video transmission playing an important role in areas like critical infrastructure monitoring and disaster management. Each of these may benefit from the application of enhanced HEVC/H.265 and SHVC capabilities. The majority of existing research into HEVC/H.265 transmission has focussed on the consumer domain, addressing issues such as broadcast transmission and delivery to mobile devices, with the lack of freely available tools widely cited as an obstacle to conducting this type of research. In this paper we present a toolset which facilitates the transmission and evaluation of HEVC/H.265 and SHVC encoded video on the popular open-source NCTUns simulator. Our toolset provides researchers with a modular, easy-to-use platform for evaluating video transmission and adaptation proposals on large-scale wired, wireless and hybrid architectures. The toolset consists of pre-processing, transmission, SHVC adaptation and post-processing tools to gather and analyse statistics. It has been implemented using HM15 and SHM5, the latest versions of the HEVC and SHVC reference software implementations, to ensure that currently adopted proposals for scalable and range extensions to the standard can be investigated. We demonstrate the effectiveness and usability of our toolset by evaluating SHVC streaming and adaptation to meet terminal constraints and network conditions in a range of wired, wireless, and large-scale wireless mesh network scenarios, each of which is designed to simulate a realistic environment. Our results are compared to those for H.264/SVC, the scalable extension to the existing H.264/AVC advanced video coding standard.
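Post-processing statistics for streaming evaluations like this one typically include per-frame PSNR between the received and source video. A minimal illustrative sketch (not part of the authors' toolset), operating on flat lists of 8-bit pixel values:

```python
import math

def psnr(frame_a, frame_b, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two equally sized frames.

    PSNR = 10 * log10(max_value^2 / MSE); identical frames give infinity.
    """
    assert len(frame_a) == len(frame_b) and frame_a
    mse = sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_value ** 2 / mse)
```

A uniform error of 16 grey levels, for example, yields roughly 24 dB, in the range where streaming impairments become clearly visible.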
Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.
Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou
2017-05-10
Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
Special-effect edit detection using VideoTrails: a comparison with existing techniques
NASA Astrophysics Data System (ADS)
Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.
1998-12-01
Video segmentation plays an integral role in many multimedia applications, such as digital libraries, content management systems, and various other video browsing, indexing, and retrieval systems. Many algorithms for segmentation of video have appeared within the past few years. Most of these algorithms perform well on cuts, but yield poor performance on gradual transitions or special-effect edits. A complete video segmentation system must also achieve good performance on special-effect edit detection. In this paper, we compare the performance of our VideoTrails-based algorithms with other existing special-effect edit-detection algorithms in the literature. We present results from experiments testing the ability to detect edits in TV programs, ranging from commercials to news magazine programs, that include the diverse special-effect edits we have introduced.
Portrayal of Alcohol Brands Popular Among Underage Youth on YouTube: A Content Analysis.
Primack, Brian A; Colditz, Jason B; Rosen, Eva B; Giles, Leila M; Jackson, Kristina M; Kraemer, Kevin L
2017-09-01
We characterized leading YouTube videos featuring alcohol brand references and examined video characteristics associated with each brand and video category. We systematically captured the 137 most relevant and popular videos on YouTube portraying alcohol brands that are popular among underage youth. We used an iterative process for codebook development. We coded variables within domains of video type, character sociodemographics, production quality, and negative and positive associations with alcohol use. All variables were double coded, and Cohen's kappa was greater than .80 for all variables except age, which was eliminated. There were 96,860,936 combined views for all videos. The most common video type was "traditional advertisements," which comprised 40% of videos. Of the videos, 20% were "guides" and 10% focused on chugging a bottle of distilled spirits. While 95% of videos featured males, 40% featured females. Alcohol intoxication was present in 19% of videos. Aggression, addiction, and injuries were uncommonly identified (2%, 3%, and 4%, respectively), but 47% of videos contained humor. Traditional advertisements represented the majority of videos related to Bud Light (83%) but only 18% of Grey Goose and 8% of Hennessy videos. Intoxication was most present in chugging demonstrations (77%), whereas addiction was only portrayed in music videos (22%). Videos containing humor ranged from 11% for music-related videos to 77% for traditional advertisements. YouTube videos depicting the alcohol brands favored by underage youth are heavily viewed, and the majority are traditional or narrative advertisements. Understanding characteristics associated with different brands and video categories may aid in intervention development.
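The double-coding reliability criterion above is Cohen's kappa, which compares observed agreement between two coders against agreement expected by chance. A minimal illustrative implementation for two raters' categorical codes (the function and variable names are ours, not the authors'):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes on the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is chance agreement from each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

A kappa above .80, the threshold used in this study, indicates near-perfect agreement on conventional benchmarks.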
Dense 3D Face Alignment from 2D Video for Real-Time Use
Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo
2018-01-01
To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533
Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.
2004-08-10
A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.
Squids old and young: Scale-free design for a simple billboard
NASA Astrophysics Data System (ADS)
Packard, Andrew
2011-03-01
Squids employ a large range of brightness-contrast spatial frequencies in their camouflage and signalling displays. The 'billboard' of coloured elements ('spots'=chromatophore organs) in the skin is built autopoietically-probably by lateral inhibitory processes-and enlarges as much as 10,000-fold during development. The resulting two-dimensional array is a fractal-like colour/size hierarchy lying in several layers of a multilayered network. Dynamic control of the array by muscles and nerves produces patterns that recall 'half-tone' processing (cf. ink-jet printer). In the more sophisticated (loliginid) squids, patterns also combine 'continuous tones' (cf. dye-sublimation printer). Physiologists and engineers can exploit the natural colour-coding of the integument to understand nerve and muscle system dynamics, examined here at the level of the ensemble. Integrative functions of the whole (H) are analysed in terms of the power spectrum within and between ensembles and of spontaneous waves travelling through the billboard. Video material may be obtained from the author at the above address.
Dynamics of Salmonella infection of macrophages at the single cell level.
Gog, Julia R; Murcia, Alicia; Osterman, Natan; Restif, Olivier; McKinley, Trevelyan J; Sheppard, Mark; Achouri, Sarra; Wei, Bin; Mastroeni, Pietro; Wood, James L N; Maskell, Duncan J; Cicuta, Pietro; Bryant, Clare E
2012-10-07
Salmonella enterica causes a range of diseases. Salmonellae are intracellular parasites of macrophages, and the control of bacteria within these cells is critical to surviving an infection. The dynamics of the bacteria invading, surviving, proliferating in and killing macrophages are central to disease pathogenesis. Fundamentally important parameters, however, such as the cellular infection rate, have not previously been calculated. We used two independent approaches to calculate the macrophage infection rate: mathematical modelling of Salmonella infection experiments, and analysis of real-time video microscopy of infection events. Cells repeatedly encounter salmonellae, with the bacteria often remaining associated with the macrophage for more than ten seconds. Once Salmonella encounters a macrophage, the probability of that bacterium infecting the cell is remarkably low: less than 5%. The macrophage population is heterogeneous in terms of its susceptibility to the first infection event. Once infected, a macrophage can undergo further infection events, but these reinfection events occur at a lower rate than that of the primary infection.
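As a sketch of the kind of estimate reported above, a per-encounter infection probability (the "less than 5%" figure) can be computed as a binomial proportion from counted encounter and infection events in the video data, with a normal-approximation confidence interval. The counts below are illustrative, not the paper's data:

```python
import math

def infection_probability(infections, encounters, z=1.96):
    """Binomial MLE of the per-encounter infection probability.

    Returns (p_hat, (lower, upper)) with a normal-approximation 95% CI,
    clipped to [0, 1].
    """
    p = infections / encounters
    se = math.sqrt(p * (1 - p) / encounters)
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))
```

For example, 20 infections observed over 500 scored encounters gives a point estimate of 0.04, consistent with a sub-5% per-encounter probability.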
Dynamics of topological solitons, knotted streamlines, and transport of cargo in liquid crystals
NASA Astrophysics Data System (ADS)
Sohn, Hayley R. O.; Ackerman, Paul J.; Boyle, Timothy J.; Sheetah, Ghadah H.; Fornberg, Bengt; Smalyukh, Ivan I.
2018-05-01
Active colloids and liquid crystals are capable of locally converting the macroscopically supplied energy into directional motion and promise a host of new applications, ranging from drug delivery to cargo transport at the mesoscale. Here we uncover how topological solitons in liquid crystals can locally transform electric energy to translational motion and allow for the transport of cargo along directions dependent on frequency of the applied electric field. By combining polarized optical video microscopy and numerical modeling that reproduces both the equilibrium structures of solitons and their temporal evolution in applied fields, we uncover the physical underpinnings behind this reconfigurable motion and study how it depends on the structure and topology of solitons. We show that, unexpectedly, the directional motion of solitons with and without the cargo arises mainly from the asymmetry in rotational dynamics of molecular ordering in liquid crystal rather than from the asymmetry of fluid flows, as in conventional active soft matter systems.
Shape Distributions of Nonlinear Dynamical Systems for Video-Based Inference.
Venkataraman, Vinay; Turaga, Pavan
2016-12-01
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems, which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representations (shape distributions) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
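The abstract does not give the exact shape descriptor, but the pipeline it describes, reconstructing the phase space from observational data and summarizing the attractor's local shape, can be sketched with a Takens delay embedding followed by a D2-style shape distribution (a histogram of pairwise point distances). Both the embedding parameters and the descriptor choice below are illustrative stand-ins, not the authors' exact method:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding of a scalar time series.

    Row t of the result is [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}],
    reconstructing a phase-space attractor without assuming a model.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def shape_distribution(points, bins=32, n_pairs=2000, seed=0):
    """D2-style shape distribution of an attractor.

    Histogram of distances between randomly sampled point pairs; a crude
    but length-stable summary of the attractor's local shape.
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), size=n_pairs)
    j = rng.integers(0, len(points), size=n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, density=True)
    return hist
```

Because the histogram is normalized over sampled pairs rather than tied to trajectory length, descriptors of this kind remain comparable across time series of different lengths, the stability property the paper emphasizes.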
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. 
Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
Studying the movement behavior of benthic macroinvertebrates with automated video tracking.
Augusiak, Jacqueline; Van den Brink, Paul J
2015-04-01
Quantifying and understanding movement is critical for a wide range of questions in basic and applied ecology. Movement ecology is also fostered by technological advances that allow automated tracking for a wide range of animal species. However, for aquatic macroinvertebrates, such detailed methods do not yet exist. We developed a video tracking method for two different species of benthic macroinvertebrates, the crawling isopod Asellus aquaticus and the swimming fresh water amphipod Gammarus pulex. We tested the effects of different light sources and marking techniques on their movement behavior to establish the possibilities and limitations of the experimental protocol and to ensure that the basic handling of test specimens would not bias conclusions drawn from movement path analyses. To demonstrate the versatility of our method, we studied the influence of varying population densities on different movement parameters related to resting behavior, directionality, and step lengths. We found that our method allows studying species with different modes of dispersal and under different conditions. For example, we found that gammarids spend more time moving at higher population densities, while asellids rest more under similar conditions. At the same time, in response to higher densities, gammarids mostly decreased average step lengths, whereas asellids did not. Gammarids, however, were also more sensitive to general handling and marking than asellids. Our protocol for marking and video tracking can be easily adopted for other species of aquatic macroinvertebrates or testing conditions, for example, presence or absence of food sources, shelter, or predator cues. Nevertheless, limitations with regard to the marking protocol, material, and a species' physical build need to be considered and tested before a wider application, particularly for swimming species. 
Data obtained with this approach can deepen the understanding of population dynamics on larger spatial scales and of the effects of different management strategies on a species' dispersal potential. PMID:25937901
NASA Technical Reports Server (NTRS)
Reiber, J. H. C.
1976-01-01
To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next border point will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.
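The dynamic-thresholding scheme described above (a narrow expectation window centred on the previously detected border point) can be sketched as follows. This is an illustrative reconstruction, not the original hardware logic; the function name, window size, and threshold rule are assumptions.

```python
# Hypothetical sketch: per-scanline border detection with a dynamic
# threshold, searching only inside a narrow "expectation window"
# centred on the border column found on the previous scanline.

def detect_border(scanlines, window=5, start_col=None):
    """Return one border column per scanline (each scanline is a list of intensities)."""
    borders = []
    prev = start_col if start_col is not None else len(scanlines[0]) // 2
    for line in scanlines:
        lo = max(0, prev - window)
        hi = min(len(line) - 1, prev + window)
        segment = line[lo:hi + 1]
        # dynamic threshold: midpoint of local min/max inside the window
        thresh = (min(segment) + max(segment)) / 2.0
        # border = first column in the window crossing the threshold
        col = next((lo + i for i, v in enumerate(segment) if v >= thresh), prev)
        borders.append(col)
        prev = col  # smoothness: the next window is centred here
    return borders
```

Because each window is centred on the previous border, the detected contour inherits the smoothness assumption the abstract relies on.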
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot with a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. A lecturer and a lecture stick are then extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.
Provision of QoS for Multimedia Services in IEEE 802.11 Wireless Network
2006-10-01
Provision of QoS for Multimedia Services in IEEE 802.11 Wireless Network. In Dynamic Communications Management (pp. 10-1 – 10-16). Meeting Proceedings...mechanisms have been used for managing a limited bandwidth link within the IPv6 military narrowband network. The detailed description of these...confirms that the implemented video rate adaptation mechanism enables improvement of the quality of video transfer.
CONARC Soft Skills Training Conference.
1973-04-05
videocassette) Script of video tape (audio portion only): USAMPS Presents DYNAMICS OF HUMAN BEHAVIOR: EGO DEFENSE MECHANISMS V-98...prepared for distribution on request to CONARC Training Aids Agency, Fort Eustis, Virginia 23604. In order to secure said presentation a 60 minute video...potential critical situations with which a driver may have to cope. In order to identify the specific purposes and situations which constitute a given job
Video Guidance, Landing, and Imaging system (VGLIS) for space missions
NASA Technical Reports Server (NTRS)
Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Flemming, J. C.
1975-01-01
The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.
NASA Technical Reports Server (NTRS)
Barker, Ed; Maley, Paul; Mulrooney, Mark; Beaulieu, Kevin
2009-01-01
In September 2008, a joint ESA/NASA multi-instrument airborne observing campaign was conducted over the Southern Pacific Ocean. The objective was the acquisition of data to support detailed atmospheric re-entry analysis for the first flight of the European Automated Transfer Vehicle (ATV)-1. Skilled observers were deployed aboard two aircraft which were flown at 12.8 km altitude within visible range of the ATV-1 re-entry zone. The observers operated a suite of instruments with low-light-level detection sensitivity including still cameras, high speed and 30 fps video cameras, and spectrographs. The collected data have provided valuable information regarding the dynamic time evolution of the ATV-1 re-entry fragmentation. Specifically, the data have satisfied the primary mission objective of recording the explosion of ATV-1's primary fuel tank, thereby validating predictions regarding the tank's demise and the altitude of its occurrence. Furthermore, the data contain the brightness and trajectories of several hundred ATV-1 fragments. It is the analysis of these properties, as recorded by the particular instrument set sponsored by NASA/Johnson Space Center, that we present here.
Stuart, Samuel; Hickey, Aodhán; Galna, Brook; Lord, Sue; Rochester, Lynn; Godfrey, Alan
2017-01-01
Detection of saccades (fast eye movements) within raw mobile electrooculography (EOG) data involves complex algorithms which typically process data acquired during seated static tasks only. Processing of data during dynamic tasks such as walking is relatively rare and complex, particularly in older adults or people with Parkinson's disease (PD). Development of algorithms that can be easily implemented to detect saccades is required. This study aimed to develop an algorithm for the detection and measurement of saccades in EOG data during static (sitting) and dynamic (walking) tasks, in older adults and people with PD. Eye-tracking via mobile EOG and an infra-red (IR) eye-tracker (with video) was performed with a group of older adults (n = 10) and PD participants (n = 10) (⩾50 years). Horizontal saccades made between targets set 5°, 10° and 15° apart were first measured while seated. Horizontal saccades were then measured while a participant walked and executed a 40° turn left and right. The EOG algorithm was evaluated by comparing the number of correct saccade detections and agreement (ICC(2,1)) between output from visual inspection of eye-tracker videos and the IR eye-tracker. The EOG algorithm detected 75-92% of saccades compared to video inspection and IR output during static testing, with fair to excellent agreement (ICC(2,1) 0.49-0.93). However, during walking, EOG saccade detection reduced to 42-88% compared to video inspection or IR output, with poor to excellent agreement between methodologies (ICC(2,1) 0.13-0.88). The algorithm was robust during seated testing but less so during walking, which was likely due to increased measurement and analysis error with a dynamic task. Future studies may consider a combination of EOG and IR for comprehensive measurement.
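A common building block for EOG saccade detection is a simple velocity threshold: a saccade is flagged while the eye-position signal changes faster than some deg/s limit. The sketch below is an illustrative stand-in for the paper's algorithm; the threshold value, sampling rate, and names are assumptions, not taken from the study.

```python
# Minimal velocity-threshold saccade detector for a 1-D EOG trace.
# eog: horizontal eye position in degrees, one sample per 1/fs seconds.

def detect_saccades(eog, fs=250.0, vel_thresh=30.0):
    """Return (start, end) sample indices of detected saccades.

    fs: sampling rate in Hz; vel_thresh: velocity threshold in deg/s.
    """
    saccades = []
    in_sacc = False
    start = 0
    for i in range(1, len(eog)):
        vel = abs(eog[i] - eog[i - 1]) * fs  # first-difference velocity, deg/s
        if vel > vel_thresh and not in_sacc:
            in_sacc, start = True, i - 1     # saccade onset
        elif vel <= vel_thresh and in_sacc:
            in_sacc = False
            saccades.append((start, i))      # saccade offset
    if in_sacc:
        saccades.append((start, len(eog) - 1))
    return saccades
```

During walking, baseline drift and motion artifacts raise the effective velocity noise floor, which is one plausible reason the abstract reports lower detection rates in the dynamic task.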
Neuswanger, Jason R.; Wipfli, Mark S.; Rosenberger, Amanda E.; Hughes, Nicholas F.
2017-01-01
Applications of video in fisheries research range from simple biodiversity surveys to three-dimensional (3D) measurement of complex swimming, schooling, feeding, and territorial behaviors. However, researchers lack a transparently developed, easy-to-use, general purpose tool for 3D video measurement and event logging. Thus, we developed a new measurement system, with freely available, user-friendly software, easily obtained hardware, and flexible underlying mathematical methods capable of high precision and accuracy. The software, VidSync, allows users to efficiently record, organize, and navigate complex 2D or 3D measurements of fish and their physical habitats. Laboratory tests showed submillimetre accuracy in length measurements of 50.8 mm targets at close range, with increasing errors (mostly <1%) at longer range and for longer targets. A field test on juvenile Chinook salmon (Oncorhynchus tshawytscha) feeding behavior in Alaska streams found that individuals within aggregations avoided the immediate proximity of their competitors, out to a distance of 1.0 to 2.9 body lengths. This system makes 3D video measurement a practical tool for laboratory and field studies of aquatic or terrestrial animal behavior and ecology.
On mobile wireless ad hoc IP video transports
NASA Astrophysics Data System (ADS)
Kazantzidis, Matheos
2006-05-01
Multimedia transports in wireless, ad-hoc, multi-hop or mobile networks must be capable of obtaining information about the network and adaptively tuning sending and encoding parameters to the network response. Obtaining meaningful metrics to guide a stable congestion control mechanism in the transport (i.e. passive, simple, end-to-end and network-technology independent) is a complex problem. Equally difficult is obtaining a reliable QoS metric that agrees with user perception in a client/server or distributed environment. Existing metrics, objective or subjective, are commonly applied before or after transmission to test or report on it, and require access to both the original and transmitted frames. In this paper, we propose that efficient and successful video delivery and the optimization of overall network QoS require innovation in a) a direct measurement of available and bottleneck capacity for congestion control and b) a meaningful subjective QoS metric that is dynamically reported to the video sender. Once these are in place, a binomial (stable, fair and TCP-friendly) algorithm can be used to determine the sending rate and other packet video parameters. An adaptive MPEG codec can then continually test and fit its parameters and its temporal-spatial data-error control balance using the perceived-QoS dynamic feedback. We suggest a new measurement based on a packet dispersion technique that is independent of underlying network mechanisms. We then present a binomial control based on direct measurements. We implement a QoS metric that is known to agree with user perception (MPQM) in a client/server, distributed environment by using predetermined table lookups and characterization of video content.
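The packet-dispersion measurement the abstract refers to rests on a simple relation: packets sent back-to-back leave the bottleneck link spaced by packet_size / capacity, so capacity can be estimated by inverting the observed spacing. A minimal sketch, with illustrative names and a median filter as an assumed guard against cross-traffic noise:

```python
# Packet-pair/train dispersion sketch: estimate bottleneck capacity
# from the arrival times of back-to-back packets of known size.

def capacity_from_dispersion(pkt_size_bits, arrivals):
    """Estimate bottleneck capacity (bits/s) from consecutive arrival times (s)."""
    gaps = [t2 - t1 for t1, t2 in zip(arrivals, arrivals[1:]) if t2 > t1]
    if not gaps:
        raise ValueError("need at least two distinct arrival times")
    # use the median gap to resist outliers caused by cross traffic
    gaps.sort()
    n = len(gaps)
    mid = gaps[n // 2] if n % 2 else (gaps[n // 2 - 1] + gaps[n // 2]) / 2.0
    return pkt_size_bits / mid
```

For example, 1500-byte packets arriving 1 ms apart imply roughly a 12 Mbit/s bottleneck, independent of the underlying link technology, which is the property the paper exploits.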
4D optical coherence tomography of aortic valve dynamics in a murine model ex vivo
NASA Astrophysics Data System (ADS)
Schnabel, Christian; Jannasch, Anett; Faak, Saskia; Waldow, Thomas; Koch, Edmund
2015-07-01
The heart and its mechanical components, especially the heart valves and leaflets, are under enormous strain during a lifetime. Like all highly stressed materials, these biological components also undergo fatigue and signs of wear, which impinge upon cardiac output and, in the end, on the health and living comfort of affected patients. Pathophysiological changes of the aortic valve leading to calcific aortic valve stenosis (AVS), the most frequent heart valve disease in humans, are of particular interest. Knowledge about changes of the dynamic behavior during the course of this disease and the possibility of early-stage diagnosis could lead to the development of new treatment strategies and drug-based options for prevention or therapy. ApoE-/- mice, an established model of AVS, versus wildtype mice were introduced into an ex vivo artificially stimulated heart model. 4D optical coherence tomography (OCT) in combination with high-speed video microscopy was applied to characterize the dynamic behavior of the murine aortic valve during artificial stimulation. OCT and high-speed video microscopy with high spatial and temporal resolution represent promising tools for the investigation of dynamic behavior and its changes in calcific aortic stenosis disease models in mice.
Monkey vocal tracts are speech-ready.
Fitch, W Tecumseh; de Boer, Bart; Mathur, Neil; Ghazanfar, Asif A
2016-12-01
For four decades, the inability of nonhuman primates to produce human speech sounds has been claimed to stem from limitations in their vocal tract anatomy, a conclusion based on plaster casts made from the vocal tract of a monkey cadaver. We used x-ray videos to quantify vocal tract dynamics in living macaques during vocalization, facial displays, and feeding. We demonstrate that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language, showing that previous techniques based on postmortem samples drastically underestimated primate vocal capabilities. Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it.
Propulsive efficiency of the underwater dolphin kick in humans.
von Loebbecke, Alfred; Mittal, Rajat; Fish, Frank; Mark, Russell
2009-05-01
Three-dimensional fully unsteady computational fluid dynamic simulations of five Olympic-level swimmers performing the underwater dolphin kick are used to estimate the swimmer's propulsive efficiencies. These estimates are compared with those of a cetacean performing the dolphin kick. The geometries of the swimmers and the cetacean are based on laser and CT scans, respectively, and the stroke kinematics is based on underwater video footage. The simulations indicate that the propulsive efficiency for human swimmers varies over a relatively wide range from about 11% to 29%. The efficiency of the cetacean is found to be about 56%, which is significantly higher than the human swimmers. The computed efficiency is found not to correlate with either the slender body theory or with the Strouhal number.
Understanding pharmacokinetics: are YouTube videos a useful learning resource?
Azer, S A
2014-07-01
To investigate whether YouTube videos on pharmacokinetics can be a useful learning resource for medical students. YouTube was searched from 01 November to 15 November 2013 for the search terms "Pharmacokinetics", "Drug absorption", "Drug distribution", "Drug metabolism", "Drug elimination", "Biliary excretion of drugs", and "Renal excretion of drugs". Only videos in English that matched the inclusion criteria were included. For each video, the following characteristics were collected: title, URL, duration, number of viewers, date uploaded, viewership per day, likes, dislikes, number of comments, number of video shares, and the uploader/creator. Using standardized criteria comprising technical, content, authority and pedagogy parameters, three evaluators independently assessed the videos for educational usefulness. Data were analyzed using SPSS software and the agreement between the evaluators was calculated using Cohen's kappa analysis. The search identified 1460 videos. Of these, only 48 fulfilled the inclusion criteria. Only 30 were classified as educationally useful videos (62.5%), scoring 13.83±0.45 (mean±SD), while the remaining 18 videos were not educationally useful (37.5%), scoring 6.48±1.64 (mean±SD), p = 0.000. The educationally useful videos were created by pharmacologists/educators, 83.3% (25/30); professors from two universities, 13.3% (04/30); and a private tutoring body, 3.3% (01/30). The useful videos were viewed by 12096 viewers (65.4%) and had a total of 433332 days on YouTube, while the non-educationally useful videos were viewed by 6378 viewers (34.6%) and had 20684 days on YouTube. No correlation was found between video total score and number of likes (R2 0.258), dislikes (R2 0.103), viewers (R2 0.186), viewership/day (R2 0.256), comments (R2 0.250), or shares (R2 0.174). The agreement between the three evaluators had an overall Cohen's kappa score in the range of 0.582-0.949.
YouTube videos on pharmacokinetics and drug elimination showed variability in their educational usefulness. Medical educators should be aware of the potential influence YouTube videos may have on students' understanding of pharmacokinetics and drug elimination. Users who rely on viewers' comments, or on approval expressed as the number of likes given by viewers, should be aware that these indicators are not accurate and do not correlate with the scores given to the videos.
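The inter-rater agreement quoted above is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal two-rater implementation for categorical labels (illustrative only, not the SPSS routine the authors used):

```python
# Cohen's kappa for two raters over the same items:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)

from collections import Counter

def cohens_kappa(ratings1, ratings2):
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # chance agreement: product of each rater's marginal label frequencies
    expected = sum(c1[label] * c2[label] for label in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa near 0.58 (the low end of the reported range) is conventionally read as moderate agreement, while values above 0.8 (the high end) indicate almost perfect agreement.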
Dynamic Simulation and Static Matching for Action Prediction: Evidence from Body Part Priming
ERIC Educational Resources Information Center
Springer, Anne; Brandstadter, Simone; Prinz, Wolfgang
2013-01-01
Accurately predicting other people's actions may involve two processes: internal real-time simulation (dynamic updating) and matching recently perceived action images (static matching). Using a priming of body parts, this study aimed to differentiate the two processes. Specifically, participants played a motion-controlled video game with…
A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
2008-01-01
task of learning dynamic textures from image sequences as well as to modeling biosurveillance drug-sales data. The constraint generation approach...previous methods in our experiments. One application of LDSs in computer vision is learning dynamic textures from video data [8]. An advantage of...over-the-counter (OTC) drug sales for biosurveillance, and sunspot numbers from the UCR archive [9]. Comparison to the best alternative methods [7, 10
Human recognition in a video network
NASA Astrophysics Data System (ADS)
Bhanu, Bir
2009-10-01
Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise for solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.
Thermal-Polarimetric and Visible Data Collection for Face Recognition
2016-09-01
pixels • Spectral range: 7.5–13 μm • Analog image output: NTSC analog video • Digital image output: Firewire radiometric, 14-bit digital video to...PC The analog video was not used for this study. The radiometric, 14-bit digital data provided temperature measurement information for comparison...distribution unlimited.
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
NASA Astrophysics Data System (ADS)
Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary
1999-12-01
This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.
Echocardiogram video summarization
NASA Astrophysics Data System (ADS)
Ebadollahi, Shahram; Chang, Shih-Fu; Wu, Henry D.; Takoma, Shin
2001-05-01
This work aims at developing innovative algorithms and tools for summarizing echocardiogram videos. Specifically, we summarize digital echocardiogram videos by temporally segmenting them into the constituent views and representing each view by the most informative frame. For the segmentation we take advantage of the well-defined spatio-temporal structure of echocardiogram videos. Two different criteria are used: the presence/absence of color and the shape of the region of interest (ROI) in each frame of the video. The change in the ROI is due to the different modes of echocardiography present in one study. The representative frame is defined to be the frame corresponding to the end-diastole of the heart cycle. To locate the end-diastole, we track the ECG of each frame to find the exact time the time-marker on the ECG crosses the peak of the R-wave; the corresponding frame is chosen to be the key-frame. The entire echocardiogram video can be summarized into either a static summary, which is a storyboard type of summary, or a dynamic summary, which is a concatenation of selected segments of the echocardiogram video. To the best of our knowledge, this is the first automated system for summarizing echocardiogram videos based on visual content.
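The key-frame rule described above (take the frame at which the ECG reaches the R-wave peak) can be sketched as a simple peak pick over per-frame ECG samples. This is an illustrative reconstruction, not the authors' implementation; the threshold and names are assumptions.

```python
# Pick key-frames by locating R-wave peaks in a per-frame ECG trace:
# a local maximum above a threshold marks end-diastole, and the video
# frame at that sample index becomes the view's representative frame.

def key_frames_from_ecg(ecg, thresh=0.5):
    """ecg: one (normalized) ECG sample per video frame; return key-frame indices."""
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            peaks.append(i)  # local maximum above threshold => R-peak
    return peaks
```

Concatenating short clips around each returned index would give the dynamic summary; keeping only the frames themselves gives the storyboard-style static summary.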
Development and evaluation of the DECIDE to move! Physical activity educational video.
Majid, Haseeb M; Schumann, Kristina P; Doswell, Angela; Sutherland, June; Hill Golden, Sherita; Stewart, Kerry J; Hill-Briggs, Felicia
2012-01-01
To develop a video that provides accessible and usable information about the importance of physical activity to type 2 diabetes self-management and ways of incorporating physical activity into everyday life. A 15-minute physical activity educational video narrated by US Surgeon General Dr Regina Benjamin was developed and evaluated. The video addresses the following topics: the effects of exercise on diabetes, preparations for beginning physical activity, types of physical activity, safety considerations (eg, awareness of symptoms of hypoglycemia during activity), and goal setting. Two patient screening groups were held for evaluation and revision of the video. Patient satisfaction ratings ranged 4.6 to 4.9 out of a possible 5.0 on dimensions of overall satisfaction, how informative they found the video to be, how well the video held their interest and attention, how easy the video was to understand, and how easy the video was to see and hear. Patients reported the educational video effective in empowering them to take strides toward increasing and maintaining physical activity in their lives. The tool is currently used in a clinical research trial, Project DECIDE, as one component of a diabetes and cardiovascular disease self-management program.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation in over 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this cutting-edge area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
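The quality drop quoted above is measured in PSNR, which for 8-bit video is derived from the mean squared error between reference and decoded frames. A minimal reference implementation over flattened pixel sequences (names are illustrative):

```python
# PSNR in dB for 8-bit samples: 10 * log10(peak^2 / MSE).

import math

def psnr(ref, test, peak=255.0):
    """ref, test: equal-length sequences of pixel values (0..peak)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak * peak / mse)
```

Because the scale is logarithmic, the reported 3 dB reduction corresponds to roughly a doubling of the mean squared error in the decoded pictures.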
Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story
Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.
2011-01-01
Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491
Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders
Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini
2008-01-01
Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
Utilization of Facial Image Analysis Technology for Blink Detection: A Validation Study.
Kitazawa, Momoko; Yoshimura, Michitaka; Liang, Kuo-Ching; Wada, Satoshi; Mimura, Masaru; Tsubota, Kazuo; Kishimoto, Taishiro
2018-06-25
The assessment of anterior eye diseases and the understanding of the psychological functions of blinking can benefit greatly from a validated blink detection technology. In this work, we propose an algorithm based on facial recognition, built on current video processing technologies, to automatically filter and analyze blinking movements. We compared electrooculography (EOG), the gold standard of blink measurement, with manual video tape recording counting (mVTRc) and our proposed automated video tape recording analysis (aVTRa) in both static and dynamic conditions to validate the aVTRa method. We measured blinking in both a static condition, where the subject sat still with the chin fixed on a table, and a dynamic condition, where the subject's face was not fixed and natural communication took place between the subject and the interviewer. We defined concordance of blinks between measurement methods as having less than 50 ms difference between eye opening and closing times. The subjects were seven healthy Japanese volunteers (three male, four female) without significant eye disease, with an average age of 31.4±7.2 years. The concordance of EOG vs. aVTRa, EOG vs. mVTRc, and aVTRa vs. mVTRc (average±SD) was found to be 92.2±10.8%, 85.0±16.5%, and 99.6±1.0% in static conditions and 32.6±31.0%, 28.0±24.2%, and 98.5±2.7% in dynamic conditions, respectively. In static conditions, we found a high blink concordance rate between the proposed aVTRa and EOG, and confirmed the validity of aVTRa in both static and dynamic conditions.
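The concordance criterion above (events from two methods agree when their timestamps fall within 50 ms of each other) can be sketched as a greedy event-matching routine. Timestamps in seconds; the matching strategy and names are illustrative assumptions, not the paper's exact procedure.

```python
# Match each event in method A to the closest unused event in method B
# within a tolerance window, and report percent concordance.

def concordance(events_a, events_b, tol=0.050):
    """Percent of events in A matched to a distinct event in B within tol seconds."""
    used = set()
    matched = 0
    for t in events_a:
        best = None
        for j, u in enumerate(events_b):
            if j not in used and abs(t - u) < tol and (
                    best is None or abs(t - u) < abs(t - events_b[best])):
                best = j
        if best is not None:
            used.add(best)
            matched += 1
    return 100.0 * matched / len(events_a) if events_a else 100.0
```

Timing jitter between methods grows once the head is free to move, which is consistent with the large drop in EOG concordance the abstract reports for the dynamic condition.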
Liu, Wei; Gerdtz, Marie; Manias, Elizabeth
2015-12-01
To examine the challenges and opportunities of undertaking a video ethnographic study on medication communication among nurses, doctors, pharmacists and patients. Video ethnography has proved to be a dynamic and useful method to explore clinical communication activities. This approach involves filming actual behaviours and activities of clinicians to develop new knowledge and to stimulate reflections of clinicians on their behaviours and activities. However, there is limited information about the complex negotiations required to use video ethnography in actual clinical practice. Discursive paper. A video ethnographic approach was used to gain better understanding of medication communication processes in two general medical wards of a metropolitan hospital in Melbourne, Australia. This paper presents the arduous and delicate process of gaining access into hospital wards to video-record actual clinical practice and the methodological and ethical issues associated with video-recording. Obtaining access to clinical settings and clinician consent are the first hurdles of conducting a video ethnographic study. Clinicians may still feel intimidated or self-conscious in being video recorded about their medication communication practices, which they could perceive as judgements being passed about their clinical competence. By thoughtful and strategic planning, video ethnography can provide in-depth understandings of medication communication in acute care hospital settings. Ethical issues of informed consent, patient safety and respect for the confidentiality of patients and clinicians need to be carefully addressed to build up and maintain trusting relationships between researchers and participants in the clinical environment. By prudently considering the complex ethical and methodological concerns of using video ethnography, this approach can help to reveal the unpredictability and messiness of clinical practice. 
The visual data generated can stimulate clinicians' reflexivity about their norms of practice and bring about improved communication about managing medications. © 2015 John Wiley & Sons Ltd.
Ares I-X Separation and Reentry Trajectory Analyses
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Starr, Brett R.
2011-01-01
The Ares I-X Flight Test Vehicle was launched on October 28, 2009 and was the first and only test flight of NASA's two-stage Ares I launch vehicle design. The launch was successful and the flight test met all of its primary and secondary objectives. This paper discusses the stage separation and reentry trajectory analysis that was performed in support of the Ares I-X test flight. Pre-flight analyses were conducted to assess the risk of stage recontact during separation, to evaluate the first stage flight dynamics during reentry, and to define the range safety impact ellipses of both stages. The results of these pre-flight analyses were compared with available flight data. On-board video taken during flight showed that the flight test vehicle successfully separated without any recontact. Reconstructed trajectory data also showed that first stage flight dynamics were well characterized by pre-flight Monte Carlo results. In addition, comparisons with flight data indicated that the complex interference aerodynamic models employed in the reentry simulation were effective in capturing the flight dynamics during separation. Finally, the splash-down locations of both stages were well within predicted impact ellipses.
1997-09-30
COUPLING BEHAVIOR AND VERTICAL DISTRIBUTION OF PTEROPODS IN COASTAL WATERS USING DATA FROM THE VIDEO PLANKTON RECORDER Scott M. Gallager Woods Hole...OBJECTIVES The general hypothesis being tested is that the vertical distribution of the pteropod Limacina retroversa is predictable as a function of light...the plankton, to a dynamic description of its instantaneous swimming behavior. 3) To couple objectives 1 and 2 through numerical modeling of pteropod
Doyle, Thomas W.; Michot, Thomas C.; Roetker, Fred; Sullivan, Jason; Melder, Marcus; Handley, Benjamin; Balmat, Jeff
2002-01-01
The advent of analog and digital video has provided amateur photographers with professional-like technology to capture dynamic images with ease and clarity. Videography is also rapidly changing traditional business and scientific applications. In the natural sciences, camcorders are being used largely to record timely observations of plant and animal behavior or the consequences of some catastrophic event. Spectacular video of dynamic events such as hurricanes, volcanic eruptions and wildfire documents the active process and aftermath. Scientists can analyze video images to quantify aspects of a given event, behavior, or response, temporally and spatially. In this study, we demonstrate a simple aerial application of videography to record the spatial extent and expression of wind damage to mangrove forest in the Bay Islands and mainland coast of northern Honduras, conducting a video overflight of these coastal forests 14 months after impact by Hurricane Mitch (1998). Coastal areas were identified where damage was evident and described relative to damage extent to forest cover, windfall orientation, and height of downed trees. The variability and spatial extent of impact on coastal forest resources are related to reconstructed wind profiles based on model simulations of Mitch's path, strength, and circulation during landfall.
Delivery of video-on-demand services using local storages within passive optical networks.
Abeywickrama, Sandu; Wong, Elaine
2013-01-28
At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
Segmentation of Pollen Tube Growth Videos Using Dynamic Bi-Modal Fusion and Seam Carving.
Tambo, Asongu L; Bhanu, Bir
2016-05-01
The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions-of-interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real time applications in pollen tube microscopy.
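The Seam Carving step named above rests on a standard dynamic-programming search for a minimum-cost path through an energy map. A minimal generic sketch (the energy values and the plain nested-list representation are illustrative; the paper applies the idea to fused fluorescence/brightfield boundary energies):

```python
def min_vertical_seam(energy):
    """Find the minimum-cost top-to-bottom seam through a 2-D energy
    map (list of equal-length rows) by dynamic programming: each cell
    extends the cheapest of its three upper neighbours."""
    rows, cols = len(energy), len(energy[0])
    cost = [list(energy[0])]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([
            energy[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
            for c in range(cols)
        ])
    # Backtrack from the cheapest cell in the last row.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        window = cost[r][lo:min(c + 2, cols)]
        seam.append(lo + window.index(min(window)))
    return seam[::-1]  # top-to-bottom column indices of the seam
```

With low energy along the true cell boundary, the returned seam traces that boundary in a single linear-time pass per frame, which is consistent with the processing-time advantage the abstract reports.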
A Video Game Platform for Exploring Satellite and In-Situ Data Streams
NASA Astrophysics Data System (ADS)
Cai, Y.
2014-12-01
Exploring spatiotemporal patterns of moving objects is essential to Earth Observation missions, such as tracking, modeling and predicting movement of clouds, dust, plumes and harmful algal blooms. Those missions involve high-volume, multi-source, and multi-modal imagery data analysis. Analytical models aim to reveal the inner structure, dynamics, and relationships of things. However, they are not necessarily intuitive to humans. Conventional scientific visualization methods are intuitive but limited by manual operations, such as area marking, measurement and alignment of multi-source data, which are expensive and time-consuming. A new video analytics platform has been in development, which integrates a video game engine with satellite and in-situ data streams. The system converts Earth Observation data into articulated objects that are mapped from a high-dimensional space to a 3D space. The object tracking and augmented reality algorithms highlight the objects' features in colors, shapes and trajectories, creating visual cues for observing dynamic patterns. The head and gesture tracker enables users to navigate the data space interactively. To validate our design, we have used NASA SeaWiFS satellite images of oceanographic remote sensing data and NOAA's in-situ cell count data. Our study demonstrates that the video game system can reduce the size and cost of traditional CAVE systems by two to three orders of magnitude. This system can also be used for satellite mission planning and public outreach.
Variability sensitivity of dynamic texture based recognition in clinical CT data
NASA Astrophysics Data System (ADS)
Kwitt, Roland; Razzaque, Sharif; Lowell, Jeffrey; Aylward, Stephen
2014-03-01
Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study the sensitivity with respect to inter-patient variability as well as sensitivity to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability issues, however, would require a large database of clinical Ultrasound from many patients, acquired in a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report recognition results on an organ (kidney) recognition task, where we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation while (2) viewing angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.
Urine Flow Dynamics Through Prostatic Urethra With Tubular Organ Modeling Using Endoscopic Imagery
Kambara, Yoichi; Yamanishi, Tomonori; Naya, Yukio; Igarashi, Tatsuo
2014-01-01
Voiding dysfunction is common in the aged male population. However, the obstruction mechanism in the lower urinary tract and critical points for obstruction remain uncertain. The aim of this paper was to develop a system to investigate the relationship between voiding dysfunction and alteration of the shape of the prostatic urethra by processing endoscopic video images of the urethra and analyzing the fluid dynamics of the urine stream. A panoramic image of the prostatic urethra was generated from cystourethroscopic video images. A virtual 3-D model of the urethra was constructed using the luminance values in the image. Fluid dynamics using the constructed model was then calculated assuming a static urethra and maximum urine flow rate. Cystourethroscopic videos from 11 patients with benign prostatic hyperplasia were recorded around administration of an alpha-1 adrenoceptor antagonist. The calculated pressure loss through the prostatic urethra in each model corresponded to the prostatic volume, and the improvements of the pressure loss after treatment correlated to the conventional clinical indices. As shown by the proposed method, the shape of the prostatic urethra affects the energy of the transported urine, and this paper implies a possible method for detecting critical lesions responsible for voiding dysfunction. The proposed method provides critical information about deformation of the prostatic urethra on voiding function. Detailed differences in the various types of relaxants for the lower urinary tract could be estimated. PMID:27170869
Van Hillegondsberg, Ludo; Carr, Jonathan; Brey, Naeem; Henning, Franclo
2017-12-01
This study seeks to determine whether the use of Eulerian video magnification (EVM) increases the detection of muscle fasciculations in people with amyotrophic lateral sclerosis (PALS) compared with direct clinical observation (DCO). Thirty-second-long video recordings were taken of 9 body regions of 7 PALS and 7 controls, and fasciculations were counted by DCO during the same 30-s period. The video recordings were then motion magnified and reviewed by 2 independent assessors. In PALS, median fasciculation count per body region was 1 by DCO (range 0-10) and 3 in the EVM recordings (range 0-15; P < 0.0001). EVM revealed more fasciculations than DCO in 61% of recordings. In controls, median fasciculation count was 0 for both DCO and EVM. Compared with DCO, EVM significantly increased the detection of fasciculations in body regions of PALS. When it is used to supplement clinical examination, EVM has the potential to facilitate the diagnosis of ALS. Muscle Nerve 56: 1063-1067, 2017. © 2017 Wiley Periodicals, Inc.
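Eulerian video magnification, as used in the study above, amplifies subtle motions by temporally band-pass filtering each pixel's intensity over time and adding the amplified band back. A minimal one-pixel sketch, assuming a difference-of-exponential-moving-averages band-pass (the published EVM pipeline additionally uses spatial pyramid decomposition, omitted here):

```python
def magnify_motion(signal, alpha=10.0, slow=0.05, fast=0.4):
    """Eulerian-style magnification of one pixel's intensity series:
    band-pass the series as the difference of a fast and a slow
    exponential moving average, then add the amplified band back."""
    lo = hi = signal[0]
    out = []
    for x in signal:
        lo += slow * (x - lo)   # slow low-pass (tracks the baseline)
        hi += fast * (x - hi)   # fast low-pass (tracks slow + band)
        band = hi - lo          # temporal band-pass component
        out.append(x + alpha * band)
    return out
```

Applied to every pixel of a fasciculation video, tiny twitch-band intensity changes become visibly larger while the static baseline is untouched, which is what lets an assessor count more fasciculations in the magnified recordings.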
Parry, Ingrid; Carbullido, Clarissa; Kawada, Jason; Bagley, Anita; Sen, Soman; Greenhalgh, David; Palmieri, Tina
2014-08-01
Commercially available interactive video games are commonly used in rehabilitation to aid in physical recovery from a variety of conditions and injuries, including burns. Most video games were not originally designed for rehabilitation purposes and although some games have shown therapeutic potential in burn rehabilitation, the physical demands of more recently released video games, such as Microsoft Xbox Kinect™ (Kinect) and Sony PlayStation 3 Move™ (PS Move), have not been objectively evaluated. Video game technology is constantly evolving and demonstrating different immersive qualities and interactive demands that may or may not have therapeutic potential for patients recovering from burns. This study analyzed the upper extremity motion demands of Kinect and PS Move using three-dimensional motion analysis to determine their applicability in burn rehabilitation. Thirty normal children played each video game while real-time movement of their upper extremities was measured to determine maximal excursion and amount of elevation time. Maximal shoulder flexion, shoulder abduction and elbow flexion range of motion were significantly greater while playing Kinect than the PS Move (p≤0.01). Elevation time of the arms above 120° was also significantly longer with Kinect (p<0.05). The physical demands for shoulder and elbow range of motion while playing the Kinect, and to a lesser extent PS Move, are comparable to the functional motion needed for daily tasks such as eating with a utensil and hair combing. Therefore, these more recently released commercially available video games show therapeutic potential in burn rehabilitation. Objectively quantifying the physical demands of video games commonly used in rehabilitation aids clinicians in integrating them into practice and lays the framework for further research on their efficacy. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung
2016-09-01
Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images was similar to that of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
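The inter-observer variability in the study above is assessed with kappa (κ) statistics. A minimal sketch of Cohen's kappa for two raters (the abstract's five-reader design would pairwise-average or use a multi-rater variant such as Fleiss' kappa; labels here are illustrative BI-RADS-style categories):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels:
    chance-corrected agreement, 1 = perfect, 0 = chance level."""
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Computed per descriptor (shape, orientation, margin, calcifications) over the readers' classifications, higher κ for video than for static images would correspond to the trend the abstract reports.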
Big hits on the small screen: an evaluation of concussion-related videos on YouTube.
Williams, David; Sullivan, S John; Schneiders, Anthony G; Ahmed, Osman Hassan; Lee, Hopin; Balasundaram, Arun Prasad; McCrory, Paul R
2014-01-01
YouTube is one of the largest social networking websites, allowing users to upload and view video content that provides entertainment and conveys many messages, including those related to health conditions, such as concussion. However, little is known about the content of videos relating to concussion. To identify and classify the content of concussion-related videos available on YouTube. An observational study using content analysis. YouTube's video database was systematically searched using 10 search terms selected from MeSH and Google Adwords. The 100 videos with the largest view counts were chosen from the identified videos. These videos and their accompanying text were analysed for purpose, source and description of content by a panel of assessors who classified them into data-driven thematic categories. 434 videos met the inclusion criteria and the 100 videos with the largest view counts were chosen. The most common categories of the videos were the depiction of a sporting injury (37%) and news reports (25%). News and media organisations were the predominant source (51%) of concussion-related videos on YouTube, with very few being uploaded by professional or academic organisations. The median number of views per video was 26 191. Although a wide range of concussion-related videos were identified, there is a need for healthcare and educational organisations to explore YouTube as a medium for the dissemination of quality-controlled information on sports concussion.
Intelligent keyframe extraction for video printing
NASA Astrophysics Data System (ADS)
Zhang, Tong
2004-10-01
Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user can obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
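A minimal sketch of the color-histogram-difference stage of candidate keyframe generation described above: a frame becomes a candidate when its histogram distance from the last candidate exceeds a threshold. The L1 distance on normalized histograms and the threshold value are illustrative assumptions; the paper combines this with motion, face, and audio cues:

```python
def candidate_keyframes(histograms, threshold=0.3):
    """Select candidate keyframe indices from per-frame normalized
    color histograms: keep a frame when its L1 histogram distance
    (halved, so 0 = identical, 1 = disjoint) from the last kept
    frame exceeds `threshold`."""
    keep = [0]  # always keep the first frame as a seed
    for i in range(1, len(histograms)):
        ref = histograms[keep[-1]]
        dist = sum(abs(a - b) for a, b in zip(histograms[i], ref)) / 2
        if dist > threshold:
            keep.append(i)
    return keep
```

The resulting candidate set would then be clustered and scored (image quality, faces, motion) to pick the final frames to print.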
Multimodal Speaker Diarization.
Noulas, A; Englebienne, G; Krose, B J A
2012-01-01
We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1990-01-01
In the study of the dynamics and kinematics of the human body, a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. The system is described.
Free-viewpoint video of human actors using multiple handheld Kinects.
Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian
2013-10-01
We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.
Video Imaging System Particularly Suited for Dynamic Gear Inspection
NASA Technical Reports Server (NTRS)
Broughton, Howard (Inventor)
1999-01-01
A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals, so that the moment each tooth of interest reaches the desired location is precisely determined; the tooth is then illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available for instantaneous analysis of the tooth of interest, or can be stored to provide a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.
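The time-division scheme above reduces to simple arithmetic: one rotation period is split into as many equal intervals as the gear has teeth, and the strobe/camera fires once per interval. A minimal sketch of that trigger schedule (function name and parameters are illustrative, not from the patent):

```python
def tooth_trigger_times(period_s, n_teeth, revolutions=1, phase_s=0.0):
    """Trigger schedule for imaging each tooth of a rotating gear.

    One complete rotation of duration `period_s` is divided into
    `n_teeth` equal intervals, so tooth k passes the imaging position
    at phase_s + k * period_s / n_teeth, repeating every revolution.
    """
    dt = period_s / n_teeth
    return [phase_s + rev * period_s + k * dt
            for rev in range(revolutions)
            for k in range(n_teeth)]
```

In practice `period_s` would be re-measured continuously from a once-per-revolution sensor so the schedule tracks shaft speed drift over multi-day runs.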
NASA Astrophysics Data System (ADS)
Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2015-03-01
We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
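The NBS objective named above maximizes the product of each user's utility gain over a disagreement point (here, the minimum acceptable video quality). A minimal sketch of that objective, evaluated in log space as an optimizer like PSO would score a candidate allocation (utility values and disagreement points are illustrative placeholders for the paper's rate-quality model):

```python
import math

def log_nash_product(utilities, disagreement):
    """Nash Bargaining objective for a candidate allocation: the
    product of each user's utility gain over their disagreement
    point, computed in log space for numerical stability. Returns
    -inf when any user falls at or below their minimum, marking the
    allocation infeasible for the optimizer."""
    total = 0.0
    for u, d in zip(utilities, disagreement):
        if u <= d:
            return float("-inf")
        total += math.log(u - d)
    return total
```

Because the log is monotone, maximizing this score over candidate subcarrier/power allocations is equivalent to maximizing the Nash product itself; a swarm of candidate allocations can be ranked by this value directly.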
Use of Video Analysis System for Working Posture Evaluations
NASA Technical Reports Server (NTRS)
McKay, Timothy D.; Whitmore, Mihriban
1994-01-01
In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu and button driven software prototype written in Supercard (trademark). Human Factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operation rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
Twenty-Five Years of Dynamic Growth.
ERIC Educational Resources Information Center
Pipes, Lana
1980-01-01
Discusses developments in instructional technology in the past 25 years in the areas of audio, video, micro-electronics, social evolution, the space race, and living with rapidly changing technology. (CMV)
Motion-seeded object-based attention for dynamic visual imagery
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Kim, Kyungnam
2017-05-01
This paper describes a novel system that finds and segments "objects of interest" in dynamic imagery (video) by (1) processing each frame with an advanced motion algorithm that pulls out regions exhibiting anomalous motion, and (2) extracting the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
PBX 9502 Gas Generation Progress Report FY17
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmes, Matthew David; Erickson, Michael Andrew Englert
The self-ignition (“cookoff”) behavior of PBX 9502 depends on the dynamic evolution of gas permeability and physical damage in the material. The time-resolved measurement of product gas generation yields insight regarding the crucial properties that dominate cookoff behavior. We report on small-scale laboratory testing performed in FY17, in which small unconfined samples of PBX 9502 were heated in a small custom-built sealed pressure vessel to self-ignition. We recorded time-lapse video of the evolving physical changes in the sample, quasi-static long-duration pressure rise, then high-speed video and dynamic pressure rise of the cookoff event. We report the full pressure attained during the cookoff of a 1.02 g sample in a free volume of 62.5 cm³.
Multimedia category preferences of working engineers
NASA Astrophysics Data System (ADS)
Baukal, Charles E.; Ausburn, Lynna J.
2016-09-01
Many have argued for the importance of continuing engineering education (CEE), but relatively few recommendations were found in the literature for how to use multimedia technologies to deliver it most effectively. The study reported here addressed this gap by investigating the multimedia category preferences of working engineers. Four categories of multimedia, with two types in each category, were studied: verbal (text and narration), static graphics (drawing and photograph), dynamic non-interactive graphics (animation and video), and dynamic interactive graphics (simulated virtual reality (VR) and photo-real VR). The results showed that working engineers strongly preferred text over narration and somewhat preferred drawing over photograph, animation over video, and simulated VR over photo-real VR. These results suggest that a variety of multimedia types should be used in the instructional design of CEE content.
2014-05-19
NASA's Solar Dynamics Observatory (SDO) zoomed in almost to its maximum level to watch tight, bright loops and much longer, softer loops shift and sway above an active region on the sun, while a darker blob of plasma in their midst was pulled about every which way (May 13-14, 2014). The video clip covers just over a day beginning at 14:19 UT on May 13. The frames were taken in the 171-angstroms wavelength of extreme ultraviolet light, but colorized red, instead of its usual bronze tone. This type of dynamic activity continues almost non-stop on the sun as opposing magnetic forces tangle with each other. Credit: NASA/Solar Dynamics Observatory
Video-Guidance Design for the DART Rendezvous Mission
NASA Technical Reports Server (NTRS)
Ruth, Michael; Tracy, Chisholm
2004-01-01
NASA's Demonstration of Autonomous Rendezvous Technology (DART) mission will validate a number of different guidance technologies, including state-differenced GPS transfers and close-approach video guidance. The video guidance for DART will employ NASA/Marshall's Advanced Video Guidance Sensor (AVGS). This paper focuses on the terminal phase of the DART mission that includes close-approach maneuvers under AVGS guidance. The closed-loop video guidance design for DART is driven by a number of competing requirements, including a need for maximizing tracking bandwidths while coping with measurement noise and the need to minimize RCS firings. A range of different strategies for attitude control and docking guidance have been considered for the DART mission, and design decisions are driven by a goal of minimizing both the design complexity and the effects of video guidance lags. The DART design employs an indirect docking approach, in which the guidance position targets are defined using relative attitude information. Flight simulation results have proven the effectiveness of the video guidance design.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan
2017-05-01
In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. It works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value of 1 (area under the ROC curve), indicating perfect classification, over a range of display frequencies and video speeds.
A Complex Systems Investigation of Group Work Dynamics in L2 Interactive Tasks
ERIC Educational Resources Information Center
Poupore, Glen
2018-01-01
Working with Korean university-level learners of English, this study provides a detailed analytical comparison of 2 task work groups that were video-recorded, with 1 group scoring very high and the other relatively low based on the results of a Group Work Dynamic (GWD) measuring instrument. Adopting a complexity theory (CT) perspective and…
ERIC Educational Resources Information Center
Alonzo, Alicia C.; Kim, Jiwon
2016-01-01
Although pedagogical content knowledge (PCK) has become widely recognized as an essential part of the knowledge base for teaching, empirical evidence demonstrating a connection between PCK and teaching practice or student learning outcomes is mixed. In response, we argue for further attention to the measurement of dynamic (spontaneous or flexible,…
Mezher, Ahmad Mohamad; Igartua, Mónica Aguilar; de la Cruz Llopis, Luis J.; Segarra, Esteve Pallarès; Tripp-Barba, Carolina; Urquiza-Aguiar, Luis; Forné, Jordi; Gargallo, Emilio Sanvicente
2015-01-01
The prevention of accidents is one of the most important goals of ad hoc networks in smart cities. When an accident happens, dynamic sensors (e.g., citizens with smart phones or tablets, smart vehicles and buses, etc.) could shoot a video clip of the accident and send it through the ad hoc network. With a video message, the seriousness of the accident can be evaluated much better by the authorities (e.g., health care units, police and ambulance drivers) than with just a simple text message. Besides, other citizens would rapidly be aware of the incident. In this way, smart dynamic sensors could participate in reporting a situation in the city using the ad hoc network, making it possible to react quickly and warn citizens and emergency units. The deployment of an efficient routing protocol to manage video-warning messages in mobile ad hoc networks (MANETs) has important benefits by allowing a fast warning of the incident, which can potentially save lives. To contribute to this goal, we propose a multipath routing protocol to provide video-warning messages in MANETs using a novel game-theoretical approach. As a base for our work, we start from our previous work, in which a 2-player game-theoretical routing protocol was proposed to provide video-streaming services over MANETs. In this article, we generalize that analysis to a general number of N players in the MANET. Simulations have been carried out to show the benefits of our proposal, taking into account the mobility of the nodes and the presence of interfering traffic. Finally, we also have tested our approach in a vehicular ad hoc network as an initial starting point for developing a novel proposal specifically designed for VANETs. PMID:25897496
Assessment of YouTube videos as a source of information on medication use in pregnancy.
Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara N D; Garcia, Amanda P; Gilboa, Suzanne M
2016-01-01
When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly accessible YouTube videos that discuss medication use in pregnancy. Using 2023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be "safe" or "unsafe" in pregnancy and compared that assessment with the medication's Teratogen Information System (TERIS) rating. After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated that they were unsafe for use in pregnancy. However, the TERIS ratings for medication products in this class range from "unlikely" to "minimal" teratogenic risk. For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a platform for communicating evidence-based medication safety information. Copyright © 2015 John Wiley & Sons, Ltd.
High-Speed Video Analysis in a Conceptual Physics Class
NASA Astrophysics Data System (ADS)
Desbien, Dwain M.
2011-09-01
The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to easily be captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
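The acceleration analysis the abstract describes can be sketched numerically: positions exported frame by frame from a video-analysis tool are differentiated twice with finite differences. The data below are synthetic (a constant 50 m/s² boost), not the article's rocket footage, and the 300 fps rate is only an example of a high-frame-rate capture.

```python
import numpy as np

def acceleration_from_positions(y, fps):
    """Estimate acceleration (m/s^2) from per-frame positions (m)
    using central finite differences."""
    dt = 1.0 / fps
    v = np.gradient(y, dt)        # velocity from position
    return np.gradient(v, dt)     # acceleration from velocity

# Synthetic check: constant 50 m/s^2 boost sampled at 300 fps.
fps = 300
t = np.arange(30) / fps
y = 0.5 * 50.0 * t**2
a = acceleration_from_positions(y, fps)   # interior values ~ 50
```

Edge samples are less accurate (one-sided differences), so in practice one would read the acceleration from the interior of the boost phase.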
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
The following article presents the design, creation, and testing of a metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis and is intended to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.
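The abstract does not give the metric's formula, but its elementary building block, an average image intensity per view, is easy to illustrate. The sketch below is our simplified reading: `stereo_intensity_gap` is an invented asymmetry indicator for the two stereo channels, not the published CAII definition, which also involves the compressed stream.

```python
import numpy as np

def average_image_intensity(frame):
    """Mean luminance of a frame -- the elementary quantity behind an
    average-intensity-style metric (simplified reading, not CAII itself)."""
    return float(frame.mean())

def stereo_intensity_gap(left, right):
    """Illustrative asymmetry indicator between the two stereo views."""
    return abs(average_image_intensity(left) - average_image_intensity(right))

left = np.full((4, 4), 100, dtype=np.uint8)    # left-view frame
right = np.full((4, 4), 90, dtype=np.uint8)    # slightly darker right view
gap = stereo_intensity_gap(left, right)        # 10.0
```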
Update on POCIT portable optical communicators: VideoBeam and EtherBeam
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
2000-05-01
LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full duplex portable laser communicator: VideoBeam™ providing near-broadcast-quality analog video and stereo audio, and EtherBeam™ providing standard Ethernet connectivity. Each POCIT™ transceiver consists of a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low probability of interception characteristics. The transmission wavelength of 1550 nm enables Class 1 eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious mineral exploration, and campus video teleconferencing applications. VideoBeam™ will be available in the second quarter of 2000, followed by EtherBeam™ in the third quarter of 2000.
Video modeling to train staff to implement discrete-trial instruction.
Catania, Cynthia N; Almeida, Daniel; Liu-Constant, Brian; DiGennaro Reed, Florence D
2009-01-01
Three new direct-service staff participated in a program that used a video model to train target skills needed to conduct a discrete-trial session. Percentage accuracy in completing a discrete-trial teaching session was evaluated using a multiple baseline design across participants. During baseline, performances ranged from a mean of 12% to 63% accuracy. During video modeling, there was an immediate increase in accuracy to a mean of 98%, 85%, and 94% for each participant. Performance during maintenance and generalization probes remained at high levels. Results suggest that video modeling can be an effective technique to train staff to conduct discrete-trial sessions.
YouTube Videos on Botulinum Toxin A for Wrinkles: A Useful Resource for Patient Education.
Wong, Katharine; Doong, Judy; Trang, Trinh; Joo, Sarah; Chien, Anna L
2017-12-01
Patients interested in botulinum toxin type A (BTX-A) for wrinkles search for videos on YouTube, but little is known about the quality and reliability of the content. The authors examined the quality, reliability, content, and target audience of YouTube videos on BTX for wrinkles. In this cross-sectional study, the term "Botox" was searched on YouTube. Sixty relevant videos in English were independently categorized by 2 reviewers as useful informational, misleading informational, useful patient view, or misleading patient view. Disagreements were settled by a third reviewer. Videos were rated on the Global Quality Scale (GQS) (1 = poor, 5 = excellent). Sixty-three percent of the BTX YouTube videos were categorized as useful informational (GQS = 4.4 ± 0.7), 33% as useful patient view (GQS = 3.21 ± 1.2), 2% as misleading informational (GQS = 1), and 2% as misleading patient view (GQS = 2.5). The large number of useful videos, the high reliability, and the wide range of content covered suggest that those who search for antiwrinkle BTX videos on YouTube are likely to view high-quality content. This suggests that YouTube may be a good source of videos to recommend to patients interested in BTX.
Reduction of capsule endoscopy reading times by unsupervised image mining.
Iakovidis, D K; Tsevas, S; Polydorou, A
2010-09-01
Screening of the small intestine has become painless and easy with wireless capsule endoscopy (WCE), a revolutionary, relatively non-invasive imaging technique performed by a wireless swallowable endoscopic capsule that transmits thousands of video frames per examination. The average time required for the visual inspection of a full 8-h WCE video ranges from 45 to 120 min, depending on the experience of the examiner. In this paper, we propose a novel approach to WCE reading-time reduction by unsupervised mining of video frames. The proposed methodology is based on a data reduction algorithm applied according to a novel scheme for the extraction of representative video frames from a full-length WCE video. It can be used either as a video summarization or as a video bookmarking tool, providing the comparative advantage of being general and unbounded by the finiteness of a training set. The number of frames extracted is controlled by a parameter that can be tuned automatically. Comprehensive experiments on real WCE videos indicate that a significant reduction in reading times is feasible. In the case of the WCE videos used, this reduction reached 85% without any loss of abnormalities.
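The representative-frame idea can be sketched with a deliberately simple greedy rule, not the authors' algorithm: keep a frame only when its feature vector departs sufficiently from the last kept representative, with a single tunable parameter playing the role of the reduction control the abstract mentions. All names and data below are illustrative.

```python
import numpy as np

def representative_frames(features, tol):
    """Greedy data-reduction sketch: keep frame i only if its feature
    vector (e.g. a colour histogram) differs from the last kept
    representative by more than tol."""
    kept = [0]
    for i in range(1, len(features)):
        if np.linalg.norm(features[i] - features[kept[-1]]) > tol:
            kept.append(i)
    return kept

# 5 frames: frames 0-2 nearly identical, frames 3-4 a new scene.
feats = np.array([[0.00], [0.01], [0.02], [1.00], [1.01]])
idx = representative_frames(feats, tol=0.1)   # keeps one frame per scene
```

Raising `tol` keeps fewer frames (stronger summarization); lowering it keeps more, which is the trade-off any such reduction parameter controls.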
Quality of YouTube TM videos on dental implants.
Abukaraky, A; Hamdan, A-A; Ameera, M-N; Nasief, M; Hassona, Y
2018-07-01
Patients search YouTube for health-care information. To examine what YouTube offers patients seeking information on dental implants, and to evaluate the quality of the provided information, a systematic search of YouTube for videos containing information on dental implants was performed using the key words "Dental implant" and "Tooth replacement". Videos were examined by two senior Oral and Maxillofacial Surgery residents who were trained and calibrated to perform the search. An initial assessment was performed to exclude non-English-language videos, duplicate videos, conference lectures, and irrelevant videos. Included videos were analyzed with regard to demographics and content usefulness. Information for patients available from the American Academy of Implant Dentistry, the European Association for Osseointegration, and the British Society of Restorative Dentistry was used for benchmarking. A total of 117 videos were analyzed. The most commonly discussed topics related to the procedures involved in dental implantology (76.1%, n=89) and to the indications for dental implants (58.1%, n=78). The mean usefulness score of the videos was poor (6.02 ± 4.7 [range 0-21]), and misleading content was common (30.1% of videos), mainly in topics related to prognosis and maintenance of dental implants. Most videos (83.1%, n=97) failed to mention the source of the information presented or where to find more about dental implants. Information about dental implants on YouTube is limited in quality and quantity. YouTube videos can have a potentially important role in modulating patients' attitudes and treatment decisions regarding dental implants.
Distributed video data fusion and mining
NASA Astrophysics Data System (ADS)
Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan
2004-09-01
This paper presents an event sensing paradigm for intelligent event-analysis in a wireless, ad hoc, multi-camera, video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis, 2) intelligent event modeling and recognition, and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results, and discuss future directions that research might take.
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos.
Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Development Of A Dynamic Radiographic Capability Using High-Speed Video
NASA Astrophysics Data System (ADS)
Bryant, Lawrence E.
1985-02-01
High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.
Waldén, Markus; Krosshaug, Tron; Bjørneboe, John; Andersen, Thor Einar; Faul, Oliver
2015-01-01
Background Current knowledge on anterior cruciate ligament (ACL) injury mechanisms in male football players is limited. Aim To describe ACL injury mechanisms in male professional football players using systematic video analysis. Methods We assessed videos from 39 complete ACL tears recorded via prospective professional football injury surveillance between 2001 and 2011. Five analysts independently reviewed all videos to estimate the time of initial foot contact with the ground and the time of ACL tear. We then analysed all videos according to a structured format describing the injury circumstances and lower limb joint biomechanics. Results Twenty-five injuries were non-contact, eight indirect contact and six direct contact injuries. We identified three main categories of non-contact and indirect contact injury situations: (1) pressing (n=11), (2) re-gaining balance after kicking (n=5) and (3) landing after heading (n=5). The fourth main injury situation was direct contact with the injured leg or knee (n=6). Knee valgus was frequently seen in the main categories of non-contact and indirect contact playing situations (n=11), but a dynamic valgus collapse was infrequent (n=3). This was in contrast to the tackling-induced direct contact situations where a knee valgus collapse occurred in all cases (n=3). Conclusions Eighty-five per cent of the ACL injuries in male professional football players resulted from non-contact or indirect contact mechanisms. The most common playing situation leading to injury was pressing followed by kicking and heading. Knee valgus was frequently seen regardless of the playing situation, but a dynamic valgus collapse was rare. PMID:25907183
Pro-Anorexia and Anti-Pro-Anorexia Videos on YouTube: Sentiment Analysis of User Responses.
Oksanen, Atte; Garcia, David; Sirola, Anu; Näsi, Matti; Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka
2015-11-12
Pro-anorexia communities exist online and encourage harmful weight loss and weight control practices, often through emotional content that enforces social ties within these communities. User-generated responses to videos that directly oppose pro-anorexia communities have not yet been researched in depth. The aim was to study emotional reactions to pro-anorexia and anti-pro-anorexia online content on YouTube using sentiment analysis. Using the 50 most popular YouTube pro-anorexia and anti-pro-anorexia user channels as a starting point, we gathered data on users, their videos, and their commentators. A total of 395 anorexia videos and 12,161 comments were analyzed using positive and negative sentiments and ratings submitted by the viewers of the videos. The emotional information was automatically extracted with an automatic sentiment detection tool whose reliability was tested with human coders. Ordinary least squares regression models were used to estimate the strength of sentiments. The models controlled for the number of video views and comments, number of months the video had been on YouTube, duration of the video, uploader's activity as a video commentator, and uploader's physical location by country. The 395 videos had more than 6 million views and comments by almost 8000 users. Anti-pro-anorexia video comments expressed more positive sentiments on a scale of 1 to 5 (adjusted prediction [AP] 2.15, 95% CI 2.11-2.19) than did those of pro-anorexia videos (AP 2.02, 95% CI 1.98-2.06). Anti-pro-anorexia videos also received more likes (AP 181.02, 95% CI 155.19-206.85) than pro-anorexia videos (AP 31.22, 95% CI 31.22-37.81). Negative sentiments and video dislikes were equally distributed in responses to both pro-anorexia and anti-pro-anorexia videos. 
Despite pro-anorexia content being widespread on YouTube, videos promoting help for anorexia and opposing the pro-anorexia community were more popular, gaining more positive feedback and comments than pro-anorexia videos. Thus, the anti-pro-anorexia content provided a user-generated counterforce against pro-anorexia content on YouTube. Professionals working with young people should be aware of the social media dynamics and versatility of user-generated eating disorder content online.
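The regression machinery behind the study's adjusted-prediction estimates is ordinary least squares with control covariates, which can be sketched as follows. The variable names mirror the study's setup (a group indicator plus a control), but the data are synthetic and invented for illustration.

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with an intercept, via numpy's lstsq.
    Returns [intercept, coef_1, coef_2, ...]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic noiseless data: sentiment = 2.0 + 0.13 * group, where
# group = 1 marks comments on anti-pro-anorexia videos and 'views'
# stands in for a control covariate with no effect.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 200).astype(float)
views = rng.normal(0.0, 1.0, 200)
sentiment = 2.0 + 0.13 * group
beta = ols_fit(np.column_stack([group, views]), sentiment)
# beta recovers [2.0, 0.13, 0.0]
```

An adjusted prediction for a group is then obtained by evaluating the fitted model at that group value with the controls held at their means.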
ICASE/LaRC Symposium on Visualizing Time-Varying Data
NASA Technical Reports Server (NTRS)
Banks, D. C. (Editor); Crockett, T. W. (Editor); Stacy, K. (Editor)
1996-01-01
Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers.
NASA Astrophysics Data System (ADS)
Czerwiński, Andrzej; Łuczko, Jan
2018-01-01
The paper summarises the experimental investigations and numerical simulations of non-planar parametric vibrations of a statically deformed pipe. Underpinning the theoretical analysis is a 3D dynamic model of a curved pipe. The pipe motion is governed by four non-linear partial differential equations with periodically varying coefficients. The Galerkin method was applied, with the beam's natural vibration modes taken as the shape functions. Experiments were conducted in the range of simple and combination parametric resonances, evidencing the possibility of in-plane and out-of-plane vibrations as well as fully non-planar vibrations in the combination resonance range. It is demonstrated that sub-harmonic and quasi-periodic vibrations are likely to be excited. The method suggested allows the spatial modes to be determined based on results registered at selected points in the pipe. Results are summarised in the form of time histories, phase trajectory plots and spectral diagrams. Dedicated video materials give a better insight into the investigated phenomena.
Camera Operator and Videographer
ERIC Educational Resources Information Center
Moore, Pam
2007-01-01
Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…
"¡Comuniquémonos, Ya!": strengthening interpersonal communication and health through video.
1992-01-01
The Nutrition Communication Project has overseen production of a training video on interpersonal communication for health workers involved in growth monitoring and promotion (GMP) programs in Latin America, entitled ¡Comuniquémonos, Ya! Producers used the following questions as their guidelines: Who is the audience? Why is the training needed? What are the objectives and advantages of using video? Communication specialists, anthropologists, educators, and nutritionists worked together to write the script. Video camera specialists then taped the video in Bolivia and Guatemala. A facilitator's guide, complete with an outline of an entire workshop, comes with the video. The guide encourages trainees to participate in various situations. Trainees are able to compare their interpersonal skills with those of the health workers on the video. Further, they can determine cause and effect. The video has 2 scenes to demonstrate poor and good communication skills using the same health worker in both situations. Other scenes highlight 6 communication skills: developing a warm environment, asking questions, sharing results, listening, observing, and giving demonstrations. All types of health workers, ranging from physicians to community health workers, as well as health workers from various countries (Guatemala, Honduras, Bolivia, and Ecuador), approve of the video. Some trainers have used the video without the guide and comment that it began a debate on communication's role in GMP efforts.
Live-cell Video Microscopy of Fungal Pathogen Phagocytosis
Lewis, Leanne E.; Bain, Judith M.; Okai, Blessing; Gow, Neil A.R.; Erwig, Lars Peter
2013-01-01
Phagocytic clearance of fungal pathogens, and microorganisms more generally, may be considered to consist of four distinct stages: (i) migration of phagocytes to the site where pathogens are located; (ii) recognition of pathogen-associated molecular patterns (PAMPs) through pattern recognition receptors (PRRs); (iii) engulfment of microorganisms bound to the phagocyte cell membrane, and (iv) processing of engulfed cells within maturing phagosomes and digestion of the ingested particle. Studies that assess phagocytosis in its entirety are informative [1-5] but are limited in that they do not normally break the process down into migration, engulfment and phagosome maturation, which may be affected differentially. Furthermore, such studies assess uptake as a single event, rather than as a continuous dynamic process. We have recently developed advanced live-cell imaging technologies, and have combined these with genetic functional analysis of both pathogen and host cells to create a cross-disciplinary platform for the analysis of innate immune cell function and fungal pathogenesis. These studies have revealed novel aspects of phagocytosis that could only be observed using systematic temporal analysis of the molecular and cellular interactions between human phagocytes and fungal pathogens and infectious microorganisms more generally. For example, we have begun to define the following: (a) the components of the cell surface required for each stage of the process of recognition, engulfment and killing of fungal cells [1, 6-8]; (b) how surface geometry influences the efficiency of macrophage uptake and killing of yeast and hyphal cells [7]; and (c) how engulfment leads to alteration of the cell cycle and behavior of macrophages [9, 10].
In contrast to single time point snapshots, live-cell video microscopy enables a wide variety of host cells and pathogens to be studied as continuous sequences over lengthy time periods, providing spatial and temporal information on a broad range of dynamic processes, including cell migration, replication and vesicular trafficking. Here we describe in detail how to prepare host and fungal cells, and to conduct the video microscopy experiments. These methods can provide a user-guide for future studies with other phagocytes and microorganisms. PMID:23329139
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.
Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occurs during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
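The statistics-based event detection described above can be sketched in a few lines. The log format, field names, and the median-deviation threshold below are simplified illustrative assumptions, not the authors' actual avconv log layout (which records 15 fields per frame) or their algorithm:

```python
# Sketch: flag "event" frames whose per-frame compression statistic
# deviates strongly from the typical value, using a robust
# median-absolute-deviation score. LOG is a made-up stand-in for the
# first-pass encoder log; frame 3 simulates a sudden scene change.
import re
import statistics

LOG = """\
frame=0 q=2.0 itex=1200 ptex=300 mv=40
frame=1 q=2.0 itex=1210 ptex=310 mv=42
frame=2 q=2.0 itex=1190 ptex=305 mv=39
frame=3 q=2.0 itex=5400 ptex=2100 mv=480
frame=4 q=2.0 itex=1205 ptex=298 mv=41
"""

def parse_stats(log_text):
    """Parse one dict of numeric fields per frame from the log text."""
    frames = []
    for line in log_text.splitlines():
        fields = dict(re.findall(r"(\w+)=([\d.]+)", line))
        frames.append({k: float(v) for k, v in fields.items()})
    return frames

def detect_events(frames, stat="ptex", thresh=5.0):
    """Flag frames whose chosen statistic is a robust outlier."""
    values = [f[stat] for f in frames]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    return [int(f["frame"]) for f in frames
            if abs(f[stat] - med) / mad > thresh]

events = detect_events(parse_stats(LOG))  # frames flagged as events
```

In practice one would combine several statistics, as the abstract notes, rather than threshold a single field.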
Development and Validation of a Bilingual Stroke Preparedness Assessment Instrument.
Skolarus, Lesli E; Mazor, Kathleen M; Sánchez, Brisa N; Dome, Mackenzie; Biller, José; Morgenstern, Lewis B
2017-04-01
Stroke preparedness interventions are limited by the lack of psychometrically sound intermediate end points. We sought to develop and assess the reliability and validity of the video-Stroke Action Test (video-STAT), an English- and Spanish-language video-based test to assess people's ability to recognize and react to stroke signs. Video-STAT development and testing was divided into 4 phases: (1) video development and community-generated response options, (2) pilot testing in community health centers, (3) administration in a national sample, bilingual sample, and neurologist sample, and (4) administration before and after a stroke preparedness intervention. The final version of the video-STAT included 8 videos: 4 acute stroke/emergency, 2 prior stroke/nonemergency, 1 nonstroke/emergency, and 1 nonstroke/nonemergency. Acute stroke recognition and action response were queried after each vignette. Video-STAT scoring was based on the acute stroke vignettes only (score range 0-12 best). The national sample consisted of 598 participants, 438 who took the video-STAT in English and 160 who took the video-STAT in Spanish. There was adequate internal consistency (Cronbach α=0.72). The average video-STAT score was 5.6 (SD=3.6), whereas the average neurologist score was 11.4 (SD=1.3). There was no difference in video-STAT scores between the 116 bilingual video-STAT participants who took the video-STAT in English or Spanish. Compared with baseline scores, the video-STAT scores increased after a stroke preparedness intervention (6.2 versus 8.9, P <0.01) among a sample of 101 black adults and youth. The video-STAT yields reliable scores that seem to be valid measures of stroke preparedness. © 2017 American Heart Association, Inc.
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; in a busy video it is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation, and hard for a content provider to insert additional overlays like advertisements, which make the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady state probability of the saccade state in the model, our estimate of VAD. We demonstrate that the steady state probability for saccade computed using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion compensated saliency maps.
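The final steady-state computation can be illustrated concisely. The two gaze states and the numeric transition probabilities below are placeholders; in the paper the probabilities are derived from inter-frame saliency-map analysis:

```python
# Sketch: VAD as the steady-state probability of the "saccade" state in a
# two-state Markov model of eye gaze. P[i, j] is the probability of moving
# from state i to state j; state 0 = track, state 1 = saccade. The numbers
# are illustrative, not derived from real saliency maps.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.6, 0.4]])

def steady_state(P):
    """Stationary distribution pi with pi @ P = pi and sum(pi) = 1."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return pi / pi.sum()

vad = steady_state(P)[1]  # steady-state probability of the saccade state
```

For this placeholder matrix the stationary distribution is (6/7, 1/7), so the VAD estimate is about 0.143.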
Unguided Rocket Employment: Why We Must Update Marine Corps Rotary Wing Attack Training
2008-03-01
100 meters of target area from a range of 1500m or less during the initial engagement. Using NTS video, validate an effective HELLFIRE engagement of a...as stipulated by the terminal controller) using 5.00 inch rockets or 20mm within 30 seconds of TOT during the initial engagement. Using NTS video...Standards. Achievement of desired illumination effects (as stipulated in OAS brief) will be debriefed by flight lead. Using NTS video, validate an
Physiological Monitoring During Simulation Training and Testing
2005-07-29
35. Participants varied in combat experience, rank, and competence with video games. Subjects' years of service ranged from less than 1 year to 15...Shoothouse Exercises (Figure 10). Experiment I: Video game vs. real world. In this study, we asked the question of whether or not the action of...playing a video game would affect the outcome of the performance in the real shoothouse and real village. There is some evidence in the literature that
Automated video surveillance: teaching an old dog new tricks
NASA Astrophysics Data System (ADS)
McLeod, Alastair
1993-12-01
The automated video surveillance market is booming with new players, new systems, new hardware and software, and an extended range of applications. This paper reviews available technology and describes the features required for a good automated surveillance system. Both hardware and software are discussed. An overview of typical applications is also given. A shift towards PC-based hybrid systems, the use of parallel processing, neural networks, and the exploitation of modern telecoms are introduced, highlighting the evolution of modern video surveillance systems.
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1993-01-01
In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
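The basic modulo-PCM idea behind the codec above can be sketched as follows. This is a textbook-style illustration, assuming neighbouring samples differ by less than half the modulo range; it is not a description of the authors' FPGA pipeline:

```python
# Sketch of modulo-PCM (MPCM): the encoder transmits only the k
# least-significant bits of each sample; the decoder picks the value
# congruent to those bits (mod 2^k) that lies closest to its prediction,
# here simply the previous reconstructed sample.
def mpcm_encode(samples, k):
    mod = 1 << k
    return [s % mod for s in samples]

def mpcm_decode(residues, k, first):
    mod = 1 << k
    out = [first]  # first sample sent in full as a reference
    for r in residues[1:]:
        pred = out[-1]
        # candidate congruent to r (mod 2^k) nearest to the prediction
        base = pred - (pred % mod) + r
        best = min((base - mod, base, base + mod),
                   key=lambda c: abs(c - pred))
        out.append(best)
    return out

pixels = [128, 130, 127, 125, 131, 134]   # slowly varying 8-bit samples
code = mpcm_encode(pixels, 4)             # 4 bits/sample instead of 8
recon = mpcm_decode(code, 4, pixels[0])   # lossless when diffs < 2^(k-1)
```

Decoding is exact as long as successive samples differ by less than 2^(k-1), which is why the scheme suits smooth, high-frame-rate imagery.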
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integrated processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integrated processing of synchronized visual and auditory information from videos of real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos of mismatched actions (images) and sounds elicited a larger P400 compared with videos of matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory streams can interfere with each other, influencing the outcome of the cognitive integration processing.
NASA Astrophysics Data System (ADS)
Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi
2013-05-01
The airborne video streams of small UAVs are commonly plagued with distracting jittery and shaking motions, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, it is a priority to improve image quality by means of electronic image stabilization. But when a small UAV makes a turn, affected by its flight characteristics, the video easily becomes oblique. This brings many difficulties to electronic image stabilization technology. The homography model performs well in oblique image motion estimation, while bringing great challenges to intentional motion estimation. Therefore, in this paper, we focus on solving the problem of video stabilization when small UAVs bank and turn. We assume the small UAV flies along an arc of fixed turning radius. Accordingly, after a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method to estimate the intentional motion, in which the path of the frame center is used to fit the video's motion track. Meanwhile, dynamic mosaicking of the image sequences is performed to make up for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method is effective in stabilizing the oblique video of small UAVs.
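Under the stated fixed-turning-radius assumption, the intentional motion of the frame centre can be recovered by fitting a circular arc to its track. The algebraic (Kasa) least-squares circle fit below is a standard technique used here for illustration; the paper's exact fitting procedure and parameters are not specified in this abstract:

```python
# Sketch: fit a circle to a noisy frame-centre track, recovering the
# centre and turning radius of the intentional motion. Synthetic data
# simulate a quarter turn of radius 50 about (10, 20) with jitter.
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: returns centre (cx, cy), radius r."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

t = np.linspace(0.0, np.pi / 2, 30)
rng = np.random.default_rng(0)
x = 10 + 50 * np.cos(t) + rng.normal(0, 0.2, t.size)  # jittery track
y = 20 + 50 * np.sin(t) + rng.normal(0, 0.2, t.size)
cx, cy, r = fit_circle(x, y)
```

The smooth arc through the fitted circle then serves as the intentional-motion estimate, and the residual frame-to-frame deviation is the jitter to be removed.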
Engineering visualization utilizing advanced animation
NASA Technical Reports Server (NTRS)
Sabionski, Gunter R.; Robinson, Thomas L., Jr.
1989-01-01
Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.
SCD's uncooled detectors and video engines for a wide-range of applications
NASA Astrophysics Data System (ADS)
Fraenkel, A.; Mizrahi, U.; Bikov, L.; Giladi, A.; Shiloah, N.; Elkind, S.; Kogan, I.; Maayani, S.; Amsterdam, A.; Vaserman, I.; Duman, O.; Hirsh, Y.; Schapiro, F.; Tuito, A.; Ben-Ezra, M.
2011-06-01
Over the last decade SCD has established a state-of-the-art VOx μ-bolometer product line. Due to its overall advantages, this technology is penetrating a large range of systems. In addition to a large variety of detectors, SCD has also recently introduced modular video engines with an open architecture. In this paper we describe the versatile applications supported by the products based on 17μm pitch: low-SWaP short-range systems, mid-range systems based on VGA arrays, and high-end systems that will utilize the XGA format. These latter systems have the potential to compete with cooled 2nd-generation scanning LWIR arrays, as will be demonstrated by TRM3 system-level calculations.
Catering to millennial learners: assessing and improving fine-needle aspiration performance.
Rowse, Phillip G; Ruparel, Raaj K; AlJamal, Yazan N; Abdelsattar, Jad M; Heller, Stephanie F; Farley, David R
2014-01-01
Fine-needle aspiration (FNA) of a palpable cervical lymph node is a straightforward procedure that should be safely performed by educated general surgery (GS) trainees. Retention of technical skill is suspect, unless sequential learning experiences are provided. However, voluntary learning experiences are no guarantee that trainees will actually use the resource. A 3-minute objective structured assessment of technical skill-type station was created to assess GS trainee performance using FNA. Objective criteria were developed and a checklist was generated (perfect score = 24). Following abysmal performance of 11 postgraduate year (PGY)-4 trainees on the FNA station of our semiannual surgical skills assessment ("X-Games"), we provided all GS residents with electronic access to a 90-second YouTube video clip demonstrating proper FNA technique. PGY-2 (n = 11) and PGY-3 (n = 10) residents subsequently were tested on FNA technique 5 and 12 days later, respectively. All 32 trainees completed the station in less than 3 minutes. Overall scores ranged from 4 to 24 (mean = 14.9). PGY-4 residents assessed before the creation of the video clip scored lowest (range: 4-18, mean = 11.4). PGY-3 residents (range: 10-22, mean = 17.8) and PGY-2 residents (range: 10-24, mean = 15.8) subsequently scored higher (p < 0.05). Ten residents admitted watching the 90-second FNA video clip and scored higher (mean = 21.7) than the 11 residents that admitted they did not watch the clip (mean = 13.1, p < 0.001). Of the 11 trainees who did not watch the video, 6 claimed they did not have time, and 5 felt it would not be useful to them. Overall performance of FNA was poor in 32 midlevel GS residents. However, a 90-second video clip demonstrating proper FNA technique viewed less than 2 weeks before the examination significantly elevated scores. Half of trainees given the chance to learn online did not take the opportunity to view the video clip. 
Although preemptive learning is effective, future efforts should attempt to improve self-directed learning habits of trainees and evaluate actual long-term skill retention. Copyright © 2014. Published by Elsevier Inc.
Nagamitsu, Shinichiro; Nagano, Miki; Yamashita, Yushiro; Takashima, Sachio; Matsuishi, Toyojiro
2006-06-01
Video game playing is an attractive form of entertainment among school-age children. Although this activity reportedly has many adverse effects on child development, these effects remain controversial. To investigate the effect of video game playing on regional cerebral blood volume, we measured cerebral hemoglobin concentrations using near-infrared spectroscopy in 12 normal volunteers consisting of six children and six adults. A Hitachi Optical Topography system was used to measure hemoglobin changes. For all subjects, the video game Donkey Kong was played on a Game Boy device. After spectroscopic probes were positioned on the scalp near the target brain regions, the participants were asked to play the game for nine periods of 15s each, with 15-s rest intervals between these task periods. Significant increases in bilateral prefrontal total-hemoglobin concentrations were observed in four of the adults during video game playing. On the other hand, significant decreases in bilateral prefrontal total-hemoglobin concentrations were seen in two of the children. A significant positive correlation between mean oxy-hemoglobin changes in the prefrontal region and those in the bilateral motor cortex area was seen in adults. Playing video games gave rise to dynamic changes in cerebral blood volume in both age groups, while the difference in the prefrontal oxygenation patterns suggested an age-dependent utilization of different neural circuits during video game tasks.
Keyhole imaging method for dynamic objects behind the occlusion area
NASA Astrophysics Data System (ADS)
Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong
2018-01-01
A method of keyhole imaging based on a camera array is realized to obtain video imagery behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to detect the image edges and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted to accomplish the initial matching of images, and then the RANSAC algorithm is applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the frame sequence in which every single frame has been stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and it can be applied in different engineering environments.
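The core geometric step, recovering the homography from matched points, can be sketched with a direct linear transform (DLT). In the pipeline above the correspondences would come from SIFT matching with RANSAC outlier rejection; here they are synthetic so the estimate can be checked against a known matrix:

```python
# Sketch: DLT estimation of the 3x3 homography H with dst ~ H @ src in
# homogeneous coordinates, from >= 4 point correspondences.
import numpy as np

def estimate_homography(src, dst):
    """Solve A h = 0 by SVD; returns H normalized so H[2, 2] = 1."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)  # null-space vector of A
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map Nx2 points through H, dividing out the homogeneous scale."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# Synthetic check: recover a known homography from 4 correspondences
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

With RANSAC one would run this estimator on random 4-point subsets of the SIFT matches and keep the hypothesis with the most inliers before a final refit.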
Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission
NASA Astrophysics Data System (ADS)
Bieszczad, Grzegorz
2015-05-01
In this article, the external digital interface designed for a thermographic camera built at the Military University of Technology is described. The aim is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image-transmission link is built using an FPGA integrated circuit with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal camera picture to a remote monitor. The dedicated video link reduces power consumption compared with solutions using ASIC-based encoders and decoders for video links such as DVI or packet-based DisplayPort, while simultaneously reducing the wiring needed to establish the link to one pair. The article describes the functions of modules integrated in the FPGA design, including synchronization to the video source, video stream packetization, transceiver interfacing, and dynamic clock generation for video standard conversion.
Progress in video immersion using Panospheric imaging
NASA Astrophysics Data System (ADS)
Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.
1998-09-01
Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, Panospheric™ Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive Panospheric™ imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI imaging, Stereo PI concepts, PI-based video-servoing concepts, PI-based video navigation concepts, and foveation concepts (to merge localized high-resolution views with immersive views).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly.
This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.« less
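The central claim, that temporal aliasing corrupts measured frequencies but leaves the spatial mode shapes intact, can be illustrated numerically. The following sketch is hypothetical, not the authors' code: synthetic two-mode responses are sampled far below the Nyquist rate of the higher mode, and an SVD stands in for the output-only modal decomposition; all signals, sensor counts, and frequencies are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, fs_true, fs_alias, T = 8, 1000.0, 30.0, 10.0

# Two invented mode shapes on a line of 8 "sensors" (video pixels).
x = np.arange(1, n_sensors + 1) / n_sensors
phi1 = np.sin(np.pi * x)           # half-sine first mode shape
phi2 = np.sin(2 * np.pi * x)       # full-sine second mode shape

t = np.arange(0, T, 1 / fs_true)
q1 = np.cos(2 * np.pi * 3.0 * t)   # 3 Hz modal coordinate
q2 = np.cos(2 * np.pi * 47.0 * t)  # 47 Hz: aliased when sampled near 30 Hz
resp = np.outer(phi1, q1) + 0.5 * np.outer(phi2, q2)
resp += 0.01 * rng.normal(size=resp.shape)   # measurement noise

# Sample far below the Nyquist rate of the 47 Hz mode.
step = int(fs_true / fs_alias)
aliased = resp[:, ::step]

# Output-only decomposition (SVD stands in for modal expansion methods).
U, s, Vt = np.linalg.svd(aliased - aliased.mean(axis=1, keepdims=True),
                         full_matrices=False)

def mac(a, b):
    """Modal Assurance Criterion: 1.0 means identical shapes."""
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

# Despite aliasing, the dominant spatial vectors match the true shapes.
print(round(mac(U[:, 0], phi1), 2), round(mac(U[:, 1], phi2), 2))
```

The aliased frequencies would still need the paper's second step (exploiting the known folding rule) to be unfolded, but the shapes survive the undersampling.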
2016-12-05
Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi
2017-08-01
This study examined the dynamic fracture behavior of nickel-titanium rotary instruments under torsional or cyclic loading with continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments while holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file, followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. Reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The scanning electron microscopic images showed a severely deformed surface in the TR group. The dynamic fracture behavior of NiTi rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and the different fatigue tests. Reciprocating rotation induced slower crack propagation and conferred higher fatigue resistance than continuous rotation under both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.
2017-01-01
The development of video-based monitoring methods allows for rapid, dynamic, and accurate monitoring of individuals or communities compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding-window approach was developed that restricts the temporal and environmental conditions used for training, to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding-window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments and may help to improve strategies for long-term monitoring of species in changing environments.
Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
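The sliding-window idea can be sketched in a few lines. The example below is illustrative only: synthetic one-dimensional "phenotype" data replace the video-derived features, a nearest-centroid rule stands in for the random forest classifier, and the temperatures, drift, and species offsets are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
temps = np.repeat(np.linspace(15.0, 25.0, 11), 40)   # temperature gradient
labels = rng.integers(0, 2, temps.size)              # two "species"

# Phenotype (e.g. body size) drifts with temperature by more than the
# fixed between-species offset, so one global rule across all conditions
# must fail even though the species are separable at any one temperature.
size = 10 + 1.5 * (temps - 15) + 2.0 * labels + rng.normal(0, 0.4, temps.size)

def nearest_centroid(train_x, train_y, test_x):
    c0 = train_x[train_y == 0].mean()
    c1 = train_x[train_y == 1].mean()
    return (np.abs(test_x - c1) < np.abs(test_x - c0)).astype(int)

# Global classifier: one rule fitted across every temperature.
pred_global = nearest_centroid(size, labels, size)

# Sliding window: classify each sample using only training data taken
# under similar environmental conditions (within 0.5 degrees here).
pred_window = np.empty_like(labels)
for i, T in enumerate(temps):
    m = np.abs(temps - T) <= 0.5
    pred_window[i] = nearest_centroid(size[m], labels[m], size[i:i + 1])[0]

acc_global = (pred_global == labels).mean()
acc_window = (pred_window == labels).mean()
print(f"global {acc_global:.2f}  windowed {acc_window:.2f}")
```

Swapping the centroid rule for `sklearn.ensemble.RandomForestClassifier` fitted per window would bring the sketch closer to the paper's actual pipeline.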
Nathan, Mitchell J; Walkington, Candace
2017-01-01
We develop a theory of grounded and embodied mathematical cognition (GEMC) that draws on action-cognition transduction for advancing understanding of how the body can support mathematical reasoning. GEMC proposes that participants' actions serve as inputs capable of driving the cognition-action system toward associated cognitive states. This occurs through a process of transduction that promotes valuable mathematical insights by eliciting dynamic depictive gestures that enact spatio-temporal properties of mathematical entities. Our focus here is on pre-college geometry proof production. GEMC suggests that action alone can foster insight but is insufficient for valid proof production if action is not coordinated with language systems for propositionalizing general properties of objects and space. GEMC guides the design of a video game-based learning environment intended to promote students' mathematical insights and informal proofs by eliciting dynamic gestures through in-game directed actions. GEMC generates several hypotheses that contribute to theories of embodied cognition and to the design of science, technology, engineering, and mathematics (STEM) education interventions. Pilot study results with a prototype video game tentatively support theory-based predictions regarding the role of dynamic gestures for fostering insight and proof-with-insight, and for the role of action coupled with language to promote proof-with-insight. But the pilot yields mixed results for deriving in-game interventions intended to elicit dynamic gesture production. Although our central purpose is an explication of GEMC theory and the role of action-cognition transduction, the theory-based video game design reveals the potential of GEMC to improve STEM education, and highlights the complex challenges of connecting embodiment research to education practices and learning environment design.
Why don't end-of-life conversations go viral? A review of videos on YouTube.
Mitchell, Imogen A; Schuster, Anne L R; Lynch, Thomas; Smith, Katherine Clegg; Bridges, John F P; Aslakson, Rebecca A
2017-06-01
To identify videos on YouTube concerning advance care planning (ACP) and synthesise existing video content and style elements. Informed by stakeholder engagement, two researchers searched YouTube for ACP videos using predefined search terms and snowballing techniques. Videos identified were reviewed and deemed ineligible for analysis if they: targeted healthcare professionals; contained irrelevant content; focused on viewers under the age of 18; were longer than 7 min in duration; received fewer than 150 views; were in a language other than English; or were a duplicate version. For each video, two investigators independently extracted general information as well as video content and stylistic characteristics. The YouTube search identified 23 100 videos with 213 retrieved for assessment and 42 meeting eligibility criteria. The majority of videos had been posted to YouTube since 2010 and produced by organisations in the USA (71%). Viewership ranged from 171 to 10 642. Most videos used a documentary style and featured healthcare providers (60%) rather than patients (19%) or families (45%). A minority of videos (29%) used upbeat or hopeful music. The videos frequently focused on completing legal medical documents (86%). None of the ACP videos on YouTube went viral and a relatively small number of them contained elements endorsed by stakeholders. In emphasising the completion of legal medical documents, videos may have failed to support more meaningful ACP. Further research is needed to understand the features of videos that will engage patients and the wider community with ACP and palliative and end-of-life care conversations. Published by the BMJ Publishing Group Limited.
ASSESSMENT OF YOUTUBE VIDEOS AS A SOURCE OF INFORMATION ON MEDICATION USE IN PREGNANCY
Hansen, Craig; Interrante, Julia D; Ailes, Elizabeth C; Frey, Meghan T; Broussard, Cheryl S; Godoshian, Valerie J; Lewis, Courtney; Polen, Kara ND; Garcia, Amanda P; Gilboa, Suzanne M
2015-01-01
Background When making decisions about medication use in pregnancy, women consult many information sources, including the Internet. The aim of this study was to assess the content of publicly-accessible YouTube videos that discuss medication use in pregnancy. Methods Using 2,023 distinct combinations of search terms related to medications and pregnancy, we extracted metadata from YouTube videos using a YouTube video Application Programming Interface. Relevant videos were defined as those with a medication search term and a pregnancy-related search term in either the video title or description. We viewed relevant videos and abstracted content from each video into a database. We documented whether videos implied each medication to be ‘safe’ or ‘unsafe’ in pregnancy and compared that assessment with the medication’s Teratogen Information System (TERIS) rating. Results After viewing 651 videos, 314 videos with information about medication use in pregnancy were available for the final analyses. The majority of videos were from law firms (67%), television segments (10%), or physicians (8%). Selective serotonin reuptake inhibitors (SSRIs) were the most common medication class named (225 videos, 72%), and 88% of videos about SSRIs indicated they were ‘unsafe’ for use in pregnancy. However, the TERIS ratings for medication products in this class range from ‘unlikely’ to ‘minimal’ teratogenic risk. Conclusion For the majority of medications, current YouTube video content does not adequately reflect what is known about the safety of their use in pregnancy and should be interpreted cautiously. However, YouTube could serve as a valuable platform for communicating evidence-based medication safety information. PMID:26541372
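The study's relevance rule (a medication term and a pregnancy-related term both appearing in the title or description) is easy to express in code. The sketch below uses hypothetical term lists and video records, not the authors' 2,023 search-term combinations.

```python
# Minimal sketch of the relevance rule; term lists and records are invented.
MEDICATION_TERMS = {"sertraline", "ssri", "ondansetron", "ibuprofen"}
PREGNANCY_TERMS = {"pregnancy", "pregnant", "prenatal", "trimester"}

def is_relevant(video):
    """Relevant = at least one medication term AND one pregnancy term."""
    text = (video["title"] + " " + video["description"]).lower()
    has_med = any(term in text for term in MEDICATION_TERMS)
    has_preg = any(term in text for term in PREGNANCY_TERMS)
    return has_med and has_preg

videos = [
    {"title": "Is sertraline safe during pregnancy?", "description": ""},
    {"title": "SSRI lawsuit information", "description": "call our law firm"},
    {"title": "Prenatal yoga basics", "description": "gentle stretches"},
]
print([is_relevant(v) for v in videos])  # only the first combines both
```

In the actual study the metadata would come from the YouTube Data API rather than hand-written dictionaries.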
Biological Response to the Dynamic Spectral-Polarized Underwater Light Field
2011-09-30
www.bio.utexas.edu/research/cummingslab/
LONG-TERM GOALS: Camouflage in marine environments requires matching all of the background optical ... polarized light field in near-shore and near-surface environments (2) Characterize the biological camouflage response of organisms to these dynamic optical ... field will be measured by the simultaneous deployment of a comprehensive optical suite including underwater video-polarimetry (Cummings), inherent ...
ERIC Educational Resources Information Center
Pfeiffer, Vanessa D. I.; Scheiter, Katharina; Kuhl, Tim; Gemballa, Sven
2011-01-01
This study investigated whether studying dynamic-static visualizations prepared first-year Biology students better for an out-of-classroom experience in an aquarium than learning how to identify species with more traditional instructional materials. During an initial classroom phase, learners either watched underwater videos of 15 freshwater fish…
ERIC Educational Resources Information Center
Miller, Jonas G.; Nuselovici, Jacob N.; Hastings, Paul D.
2016-01-01
How does empathic physiology unfold as a dynamic process, and which aspect of empathy predicts children's kindness? In response to empathy induction videos, 4- to 6-year-old children (N = 180) showed an average pattern of dynamic respiratory sinus arrhythmia (RSA) change characterized by early RSA suppression, followed by RSA recovery, and modest…
Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology
NASA Astrophysics Data System (ADS)
Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.
2014-02-01
Monitoring changes in the Yellow River icicle hazard requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive-care area in southern BaoTou of the Inner Mongolia autonomous region. The monitoring period ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1832 video key frames covering an area of 7.8 km² took 34.786 seconds. The stitching and correction time was 122.34 seconds, and the accuracy was better than 0.5 m. Through comparison of the precisely processed stitched video images, the method determines changes in the Yellow River ice and accurately locates the position of the ice bar, improving on the traditional visual method by more than 100 times. The results provide accurate decision-aid information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break is repeatedly monitored, and five-meter accuracy for the ice break is achieved through accurate monitoring and evaluation analysis.
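One step of such a pipeline, key-frame selection, can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: tiny synthetic "frames" stand in for UAV video, and a frame is kept only when it differs sufficiently from the last kept frame.

```python
import numpy as np

# Build a synthetic 30-frame video: the scene changes at frames 10 and 20
# (as if the camera moved), with small per-frame sensor noise on top.
rng = np.random.default_rng(2)
scene = rng.random((8, 8))
frames = []
for k in range(30):
    if k in (10, 20):                        # scene change
        scene = rng.random((8, 8))
    frames.append(scene + rng.normal(0, 0.01, scene.shape))

def key_frames(frames, thresh=0.1):
    """Keep a frame when its mean absolute difference from the last
    kept frame exceeds the threshold."""
    keep = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[keep[-1]]).mean()
        if diff > thresh:
            keep.append(i)
    return keep

print(key_frames(frames))   # the first frame plus the two scene changes
```

A production pipeline would decode real video (e.g. with OpenCV) and follow key-frame extraction with feature-based stitching and georeferencing, as the abstract describes.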
Work zone speed reduction utilizing dynamic speed signs
DOT National Transportation Integrated Search
2011-08-30
Vast quantities of transportation data are automatically recorded by intelligent transportations infrastructure, such as inductive loop detectors, video cameras, and side-fire radar devices. Such devices are typically deployed by traffic management c...
Kumar, Ankur N.; Miga, Michael I.; Pheiffer, Thomas S.; Chambless, Lola B.; Thompson, Reid C.; Dawant, Benoit M.
2014-01-01
One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for the tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement, and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (~1 hour) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery.
When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance. PMID:25189364
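The geometry underlying stereo-pair surface estimation can be summarized by the standard rectified-stereo depth relation. The sketch below uses illustrative numbers only; an operating microscope's effective focal length and baseline vary with magnification, which is precisely why the paper must estimate the magnification factor.

```python
# A simplified sketch of rectified stereo depth (not the authors'
# implementation): focal length f in pixels, baseline B in meters,
# disparity d in pixels.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Illustrative numbers only; effective f and B change with magnification.
z = depth_from_disparity(f_px=2000.0, baseline_m=0.025, disparity_px=125.0)
print(round(z, 3))
```

Applying this per matched pixel pair over the image yields the 3D surface that then drives the biomechanical model.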
Kumar, Ankur N; Miga, Michael I; Pheiffer, Thomas S; Chambless, Lola B; Thompson, Reid C; Dawant, Benoit M
2015-01-01
STS-74/MIR Photogrammetric Appendage Structural Dynamics Experiment Preliminary Data Analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.; Welch, Sharon S.; Pappa, Richard S.; Demeo, Martha E.
1997-01-01
The Photogrammetric Appendage Structural Dynamics Experiment was designed, developed, and flown to demonstrate and prove measurement of the structural vibration response of a Russian Space Station Mir solar array using photogrammetric methods. The experiment flew on the STS-74 Space Shuttle mission to Mir in November 1995 and obtained video imagery of solar array structural response to various excitation events. The video imagery has been digitized and triangulated to obtain response time history data at discrete points on the solar array. This data has been further processed using the Eigensystem Realization Algorithm modal identification technique to determine the natural vibration frequencies, damping, and mode shapes of the solar array. The results demonstrate that photogrammetric measurement of articulating, nonoptically targeted, flexible solar arrays and appendages is a viable, low-cost measurement option for the International Space Station.
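The Eigensystem Realization Algorithm step can be sketched compactly. The example below is a minimal single-channel ERA on synthetic free-decay data (invented frequency and damping, not the Mir solar-array measurements): build Hankel matrices from the response, form a reduced-order realization by SVD, and read modal frequency and damping from the eigenvalues of the identified system matrix.

```python
import numpy as np

# Synthetic free-decay response: one mode at 5 Hz with 2% damping.
fs, f_true, zeta_true = 100.0, 5.0, 0.02
t = np.arange(0, 5, 1 / fs)
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta_true**2)
y = np.exp(-zeta_true * wn * t) * np.cos(wd * t)

# Hankel matrix of the response and its one-step-shifted counterpart.
r = 40
H0 = np.array([[y[i + j] for j in range(r)] for i in range(r)])
H1 = np.array([[y[i + j + 1] for j in range(r)] for i in range(r)])

# Reduced-order realization via SVD (model order 2 for one mode).
U, s, Vt = np.linalg.svd(H0)
n = 2
Sr = np.diag(1 / np.sqrt(s[:n]))
A = Sr @ U[:, :n].T @ H1 @ Vt[:n, :].T @ Sr   # discrete system matrix

lam = np.linalg.eigvals(A)
s_cont = np.log(lam) * fs                     # continuous-time poles
freq = np.abs(s_cont) / (2 * np.pi)           # natural frequency, Hz
damp = -s_cont.real / np.abs(s_cont)          # damping ratio
print(round(freq[0], 2), round(damp[0], 3))   # recovers ~5 Hz, ~0.02
```

The experiment's multi-point photogrammetric time histories extend this to block Hankel matrices, recovering mode shapes as well as frequencies and damping.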
NASA Astrophysics Data System (ADS)
Sood, Suresh; Pattinson, Hugh
Traditionally, face-to-face negotiations in the real world have not been looked at as a complex-systems interaction of actors resulting in a dynamic and potentially emergent system. If negotiations are indeed an outcome of a dynamic interaction of simpler behaviors, just as with a complex system, we should be able to see the patterns contributing to the complexities of a negotiation under study. This paper and the supporting research set out to show B2B (business-to-business) negotiations as complex systems of interacting actors exhibiting dynamic and emergent behavior. This paper discusses exploratory research based on negotiation simulations in which a large number of business students participate as buyers and sellers. The student interactions are captured on video, and a purpose-built research method looks for patterns of interaction between actors using visualization techniques traditionally reserved for observing the algorithmic complexity of complex systems. Students are videoed while negotiating with partners. Each video is tagged according to a recognized classification and coding scheme for negotiations. The classification relates to the phases through which any particular negotiation might pass, such as laughter, aggression, compromise, and so forth, through some 30 possible categories. Were negotiations more or less successful if they progressed through the categories in different ways? Furthermore, do the data depict emergent pathway segments considered to be more or less successful? This focus on emergence within the data provides further strong support for face-to-face (F2F) negotiations to be construed as complex systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
A video on computer security is described. Lonnie Moore, the Computer Security Manager, CSSM/CPPM at Lawrence Livermore National Laboratory (LLNL), and Gale Warshawsky, the Coordinator for Computer Security Education and Awareness at LLNL, wanted to share topics such as computer ethics, software piracy, privacy issues, and protecting information in a format that would capture and hold an audience's attention. Four Computer Security Short Subject videos were produced, each running 1-3 minutes. These videos are very effective education and awareness tools that can be used to generate discussions about computer security concerns and good computing practices.
Video Modeling: A Visually Based Intervention for Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Ganz, Jennifer B.; Earles-Vollrath, Theresa L.; Cook, Katherine E.
2011-01-01
Visually based interventions such as video modeling have been demonstrated to be effective with students with autism spectrum disorder (ASD). This approach has wide utility, is appropriate for use with students of a range of ages and abilities, promotes independent functioning, and can be used to address numerous learner objectives, including…
Tendon rupture associated with excessive smartphone gaming.
Gilman, Luke; Cage, Dori N; Horn, Adam; Bishop, Frank; Klam, Warren P; Doan, Andrew P
2015-06-01
Excessive use of smartphones has been associated with injuries. A 29-year-old, right hand-dominant man presented with chronic left thumb pain and loss of active motion from playing a Match-3 puzzle video game on his smartphone all day for 6 to 8 weeks. On physical examination, the left extensor pollicis longus tendon was not palpable, and no tendon motion was noted with wrist tenodesis. The thumb metacarpophalangeal range of motion was 10° to 80°, and thumb interphalangeal range of motion was 30° to 70°. The clinical diagnosis was rupture of the left extensor pollicis longus tendon. The patient subsequently underwent an extensor indicis proprius (1 of 2 tendons that extend the index finger) to extensor pollicis longus tendon transfer. During surgery, rupture of the extensor pollicis longus tendon was seen between the metacarpophalangeal and wrist joints. The potential for video games to reduce pain perception raises clinical and social considerations about excessive use, abuse, and addiction. Future research should consider whether pain reduction is a reason some individuals play video games excessively, manifest addiction, or sustain injuries associated with video gaming.
Video Guidance Sensor System With Integrated Rangefinding
NASA Technical Reports Server (NTRS)
Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Howard, Richard T. (Inventor); Roe, Fred Davis, Jr. (Inventor); Bell, Joseph L. (Inventor)
2006-01-01
A video guidance sensor system for use, e.g., in automated docking of a chase vehicle with a target vehicle. The system includes an integrated rangefinder sub-system that uses time-of-flight measurements to measure range. The rangefinder sub-system includes a pair of matched photodetectors for respectively detecting an output laser beam and a return laser beam, a buffer memory for storing the photodetector outputs, and a digitizer connected to the buffer memory and including dual amplifiers and analog-to-digital converters. A digital signal processor processes the digitized output to produce a range measurement.
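The time-of-flight principle behind the rangefinder sub-system reduces to range = c * delay / 2, with the delay found by comparing the two photodetector records. The sketch below is hypothetical (invented sample rate, pulse shape, and delay), using cross-correlation of the digitized outgoing and return pulses.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
fs = 1e9                   # assumed 1 GS/s digitizer

# Simulated photodetector records: a transmitted pulse and a weaker
# return delayed by an (in practice unknown) number of samples.
n = 4096
out_pulse = np.zeros(n)
out_pulse[100:110] = 1.0
delay_samples = 334
ret_pulse = np.roll(out_pulse, delay_samples) * 0.2

# Cross-correlation peak gives the round-trip delay in samples.
lag = np.argmax(np.correlate(ret_pulse, out_pulse, mode="full")) - (n - 1)
range_m = C * (lag / fs) / 2.0
print(round(range_m, 2))   # ~50 m for a 334-sample delay at 1 GS/s
```

The division by two converts the round-trip travel time into one-way range.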
Tompkins, Matthew L.; Woods, Andy T.; Aimola Davies, Anne M.
2016-01-01
Drawing inspiration from sleight-of-hand magic tricks, we developed an experimental paradigm to investigate whether magicians’ misdirection techniques could be used to induce the misperception of “phantom” objects. While previous experiments investigating sleight-of-hand magic tricks have focused on creating false assumptions about the movement of an object in a scene, our experiment investigated creating false assumptions about the presence of an object in a scene. Participants watched a sequence of silent videos depicting a magician performing with a single object. Following each video, participants were asked to write a description of the events in the video. In the final video, participants watched the Phantom Vanish Magic Trick, a novel magic trick developed for this experiment, in which the magician pantomimed the actions of presenting an object and then making it magically disappear. No object was presented during the final video. The silent videos precluded the use of false verbal suggestions, and participants were not asked leading questions about the objects. Nevertheless, 32% of participants reported having visual impressions of non-existent objects. These findings support an inferential model of perception, wherein top-down expectations can be manipulated by the magician to generate vivid illusory experiences, even in the absence of corresponding bottom-up information. PMID:27493635
Real time visualization of dynamic magnetic fields with a nanomagnetic ferrolens
NASA Astrophysics Data System (ADS)
Markoulakis, Emmanouil; Rigakis, Iraklis; Chatzakis, John; Konstantaras, Antonios; Antonidakis, Emmanuel
2018-04-01
Due to advancements in nanomagnetism and the latest nanomagnetic materials and devices, a new potential field has been opened up for research and applications that was not possible before. We herein propose a new research field and application of nanomagnetism for the visualization of dynamic magnetic fields in real time: in short, Nano Magnetic Vision. A new methodology, technique, and apparatus were invented and prototyped in order to demonstrate and test this new application. As an application example, the visualization of the dynamic magnetic field on a transmitting antenna was chosen. Never-before-seen high-resolution photos and real-time color video revealing the actual dynamic magnetic field inside a transmitting radio antenna rod have been captured for the first time. The antenna rod is fed with six-hundred-volt orthogonal pulses. This unipolar signal is in the very low frequency (VLF) range. The signal, combined with the extremely short electrical length of the rod, ensures the generation of a relatively strong fluctuating magnetic field, analogous to the transmitted signal, along and inside the antenna. This field is induced into a ferrolens and becomes visible in real time within the normal human visual spectrum. The name we have given to the new observation apparatus is the SPIONs Superparamagnetic Ferrolens Microscope (SSFM), a powerful passive scientific observation tool with many other potential applications in the near future.
Method to investigate temporal dynamics of ganglion and other retinal cells in the living human eye
NASA Astrophysics Data System (ADS)
Kurokawa, Kazuhiro; Liu, Zhuolin; Crowell, James; Zhang, Furu; Miller, Donald T.
2018-02-01
The inner retina is critical for visual processing, but much remains unknown about its neural circuitry and vulnerability to disease. A major bottleneck has been our inability to observe the structure and function of the cells composing these retinal layers in the living human eye. Here, we present a noninvasive method to observe both structural and functional information. Adaptive optics optical coherence tomography (AO-OCT) is used to resolve the inner retinal cells in all three dimensions, and novel post-processing algorithms are applied to extract structure and physiology down to the cellular level. AO-OCT captured the 3D mosaic of individual ganglion cell somas, retinal nerve fiber bundles of micron caliber, and microglial cells, all in exquisite detail. Time correlation analysis of the AO-OCT videos revealed notable temporal differences between the principal layers of the inner retina: the ganglion cell layer (GCL) was more dynamic than the nerve fiber and inner plexiform layers. At the cellular level, we applied a customized correlation method to individual GCL somas and found a mean time constant of activity of 0.57 s with a spread of ±0.1 s, suggesting a range of physiological dynamics even in the same cell type. Extending our method to slower dynamics (from minutes to one year), time-lapse imaging and temporal speckle contrast revealed appendage and soma motion of resting microglial cells at the retinal surface.
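The time-correlation idea described above can be sketched as follows: estimate an activity time constant as the lag at which the normalized autocorrelation of a pixel-intensity time series first drops below 1/e. This is a minimal illustration on synthetic data, not the authors' customized method; the 1/e criterion and the AR(1) test signal are assumptions of this sketch.

```python
import numpy as np

def activity_time_constant(signal, dt):
    """Estimate a time constant as the lag (in seconds) where the
    normalized autocorrelation of a zero-mean signal first falls
    below 1/e; returns the full record length if it never does."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    acf /= acf[0]  # normalize so acf[0] == 1
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt if below.size else len(x) * dt

# Synthetic example: exponentially correlated (AR(1)) noise sampled
# at 100 Hz with a true correlation time of 0.5 s.
rng = np.random.default_rng(0)
dt, tau_true, n = 0.01, 0.5, 20000
a = np.exp(-dt / tau_true)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1 - a * a) * rng.standard_normal()
tau_est = activity_time_constant(x, dt)  # close to 0.5 s
```

The same estimator applied to every soma's intensity trace would yield a distribution of time constants, analogous to the 0.57 ± 0.1 s spread reported above.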
NASA Astrophysics Data System (ADS)
Le Bars, M.; Wacheul, J. B.
2015-12-01
Telluric planet formation involved the settling of large amounts of liquid iron, delivered by impacting planetesimals, into an ambient viscous magma ocean. The initial state of planets was largely determined by exchanges of heat and elements during this iron rain. Up to now, most models of planet formation have simply assumed that the metal rapidly equilibrated with the whole mantle. Other models account for simplified dynamics of the iron rain, involving the settling of single-size drops at the Stokes velocity. But the fluid dynamics of iron sedimentation is much more complex, and is influenced by the large viscosity ratio between the metal and the ambient fluid, as shown in studies of rising gas bubbles (e.g. Bonometti and Magnaudet 2006). We aim to develop a global understanding of the iron rain dynamics. Our study relies on a model experiment, consisting of popping a balloon of heated liquid metal at the top of a tank filled with viscous liquid. The experiments reach the relevant turbulent planetary regime and cover the whole range of expected viscosity ratios. High-speed videos allow us to determine the dynamics of drop clouds, as well as the statistics of drop sizes, shapes, and velocities. We also develop an analytical model of turbulent diffusion during settling, validated by measuring the temperature decrease of the metal blob. We finally present consequences for models of planet formation.
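The single-size-drop Stokes settling assumption mentioned above can be made concrete. The property values below are illustrative assumptions, not figures from the abstract; the resulting drop Reynolds number well above 1 shows why the creeping-flow premise behind the Stokes velocity breaks down for planetary iron rain.

```python
def stokes_velocity(radius_m, rho_drop, rho_fluid, mu_fluid, g=9.81):
    """Terminal settling velocity (m/s) of a sphere in the creeping-flow
    (Stokes) limit: v = 2 (rho_drop - rho_fluid) g r^2 / (9 mu)."""
    return 2.0 * (rho_drop - rho_fluid) * g * radius_m ** 2 / (9.0 * mu_fluid)

# Assumed illustrative values: a 1 cm radius iron drop (7800 kg/m^3)
# settling through silicate melt of density 3000 kg/m^3, viscosity 1 Pa s.
r, rho_fe, rho_melt, mu = 0.01, 7800.0, 3000.0, 1.0
v = stokes_velocity(r, rho_fe, rho_melt, mu)  # about 1 m/s

# Drop Reynolds number; Re >> 1 signals that the creeping-flow
# assumption no longer holds and the real dynamics are turbulent.
reynolds = rho_melt * v * (2 * r) / mu
```

With these values the Reynolds number is of order 10-100, consistent with the abstract's point that realistic iron sedimentation sits in a turbulent regime that simple Stokes settling cannot capture.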
Ergonomic problems encountered by the surgical team during video endoscopic surgery.
Kaya, Oskay I; Moran, Munevver; Ozkardes, Alper B; Taskin, Emre Y; Seker, Gaye E; Ozmen, Mahir M
2008-02-01
The aim of this study was to analyze, by means of a questionnaire, the ergonomic problems faced by surgical teams during video endoscopic surgery. A questionnaire was distributed to 100 medical personnel, from 8 different disciplines, who performed video endoscopic surgeries. Participants were asked to answer 13 questions related to physical, perceptive, and cognitive problems. Eighty-two questionnaires were returned. Although there were differences among the disciplines, the proportion of participants attributing various problems to poor ergonomic conditions ranged from 32% to 72%. As the problems encountered by the staff during video endoscopic surgery and the poor ergonomic conditions of the operating room affect the productivity of the surgical team and the safety and efficiency of the surgery, a redesign of the instruments and the operating room is required.
POCIT portable optical communicators: VideoBeam and EtherBeam
NASA Astrophysics Data System (ADS)
Mecherle, G. Stephen; Holcomb, Terry L.
1999-12-01
LDSC is developing the POCIT™ (Portable Optical Communication Integrated Transceiver) family of products, which now includes VideoBeam™ and the latest addition, EtherBeam™. Each is a full-duplex portable laser communicator: VideoBeam™ provides near-broadcast-quality analog video and stereo audio, and EtherBeam™ provides standard Ethernet connectivity. Each POCIT™ transceiver consists of a 3.5-pound unit with a binocular-type form factor, which can be manually pointed, tripod-mounted, or gyro-stabilized. Both units have an operational range of over two miles (clear air) with excellent jam-resistance and low-probability-of-interception characteristics. The transmission wavelength of 1550 nm enables Class I eyesafe operation (ANSI, IEC). The POCIT™ units are ideally suited for numerous military scenarios, surveillance/espionage, industrial precious-mineral exploration, and campus video teleconferencing applications.
Analysis of dynamic smile and upper lip curvature in young Chinese
Liang, Ling-Zhi; Hu, Wen-Jie; Zhang, Yan-Ling; Chung, Kwok-Hung
2013-01-01
During smile evaluation and anterior esthetic construction, anatomic and racial variations should be considered in order to achieve better matching results. The aims of this study were to validate an objective method for recording the spontaneous smile process and to categorize the smile and upper lip curvature of Chinese Han-nationality youth. One hundred and eighty-eight Chinese Han-nationality youths (88 males and 100 females), ranging from 20 to 35 years of age, were selected. Spontaneous smiles were elicited by watching comical movies, and the dynamics of the spontaneous smile were captured continuously with a digital video camera. All subjects' smiles were categorized into three types, commissure, cuspid, and gummy smile, based on video editing software and final images. Subjects' upper lip curvatures were also measured and divided into three groups: upward, straight, and downward. Reliability analysis was conducted to obtain intra-rater reliability across two repeated measurements. The Pearson chi-square test was used to compare differences for each parameter (α=0.05). In smile classification, 60.6% commissure smiles, 33.5% cuspid smiles, and 5.9% gummy smiles were obtained. In upper lip measurement, 26.1% upward, 39.9% straight, and 34.0% downward upper lip curvatures were determined. The commissure smile group showed a significantly higher percentage of straight (46.5%) and upward (40.4%) upper lip curvatures (P<0.05), while the cuspid (65.1%) and gummy (72.7%) smile groups showed a significantly higher frequency of downward upper lip curvature (P<0.05). It is evident that differences in upper lip curvature and smile classification exist by race, when comparing Chinese subjects with those of Caucasian descent, and by gender. PMID:23558343
Speiser, Jodi J; Hughes, Ian; Mehta, Vikas; Wojcik, Eva M; Hutchens, Kelli A
2014-01-01
Dermatopathology has relatively few studies regarding teledermatopathology, and none have addressed the use of new technologies such as the tablet PC. We hypothesized that the combination of our existing dynamic nonrobotic system with a tablet PC could provide a novel and cost-efficient method to remotely diagnose dermatopathology cases. Ninety-three cases diagnosed by conventional light microscopy at least 5 months earlier by the participating dermatopathologist were retrieved by an electronic pathology database search. A high-resolution video camera (Nikon DS-L2, version 4.4) mounted on a microscope was used to transmit digital video of a slide to an Apple iPad 2 (Apple Inc, Cupertino, CA) at the pathologist's remote location via live streaming, at an interval of 500 ms and a resolution of 1280 × 960 pixels. Concordance with the original diagnosis and the seconds elapsed in reaching the diagnosis were recorded. 24.7% (23/93) of cases were melanocytic, 70.9% (66/93) were nonmelanocytic, and 4.4% (4/93) were inflammatory. 92.5% (86/93) of cases were diagnosed on immediate viewing (<5 seconds), with an average time to diagnosis of 40.2 seconds (range: 10-218 seconds). Of the cases diagnosed immediately, 98.8% (85/86) of the telediagnoses were concordant with the original. Telepathology performed via a tablet PC may serve as a reliable and rapid technique for the diagnosis of routine cases, with some diagnostic caveats in mind. Our study established a novel and cost-efficient solution for those institutions that may not have the capital to purchase either a dynamic robotic system or a virtual slide system.
The Computer-Assisted Brief Intervention for Tobacco (CABIT) program: a pilot study.
Boudreaux, Edwin D; Bedek, Kristyna L; Byrne, Nelson J; Baumann, Brigitte M; Lord, Sherrill A; Grissom, Grant
2012-12-03
Health care providers do not routinely carry out brief counseling for tobacco cessation despite the evidence for its effectiveness. For this intervention to be routinely used, it must be brief, be convenient, require little investment of resources, require little specialized training, and be perceived as efficacious by providers. Technological advances hold much potential for addressing the barriers preventing the integration of brief interventions for tobacco cessation into the health care setting. This paper describes the development and initial evaluation of the Computer-Assisted Brief Intervention for Tobacco (CABIT) program, a web-based, multimedia tobacco intervention for use in opportunistic settings. The CABIT uses a self-administered, computerized assessment to produce personalized health care provider and patient reports, and cue a stage-matched video intervention. Respondents interested in changing their tobacco use are offered a faxed referral to a "best matched" tobacco treatment provider (ie, dynamic referral). During 2008, the CABIT program was evaluated in an emergency department, an employee assistance program, and a tobacco dependence program in New Jersey. Participants and health care providers completed semistructured interviews and satisfaction ratings of the assessment, reports, video intervention, and referrals using a 5-point scale. Mean patient satisfaction scores (n = 67) for all domains ranged from 4.00 (Good) to 5.00 (Excellent; Mean = 4.48). Health care providers completed satisfaction forms for 39 patients. Of these 39 patients, 34 (87%) received tobacco resources and referrals they would not have received under standard care. Of the 45 participants offered a dynamic referral, 28 (62%) accepted. The CABIT program provided a user-friendly, desirable service for tobacco users and their health care providers. 
Further development and clinical trial testing is warranted to establish its effectiveness in promoting treatment engagement and tobacco cessation. PMID:23208070
McCoy, Scott W.; Coe, Jeffrey A.; Kean, Jason W.; Tucker, Greg E.; Staley, Dennis M.; Wasklewicz, Thad A.
2011-01-01
Debris flows initiated by surface-water runoff during short duration, moderate- to high-intensity rainfall are common in steep, rocky, and sparsely vegetated terrain. Yet large uncertainties remain about the potential for a flow to grow through entrainment of loose debris, which make formulation of accurate mechanical models of debris-flow routing difficult. Using a combination of in situ measurements of debris flow dynamics, video imagery, tracer rocks implanted with passive integrated transponders (PIT) and pre- and post-flow 2-cm resolution digital terrain models (terrain data presented in a companion paper by STALEY et alii, 2011), we investigated the entrainment and transport response of debris flows at Chalk Cliffs, CO, USA. Four monitored events during the summer of 2009 all initiated from surface-water runoff, generally less than an hour after the first measurable rain. Despite reach-scale morphology that remained relatively constant, the four flow events displayed a range of responses, from long-runout flows that entrained significant amounts of channel sediment and dammed the main-stem river, to smaller, short-runout flows that were primarily depositional in the upper basin. Tracer-rock travel-distance distributions for these events were bimodal; particles either remained immobile or they travelled the entire length of the catchment. The long-runout, large-entrainment flow differed from the other smaller flows by the following controlling factors: peak 10-minute rain intensity; duration of significant flow in the channel; and to a lesser extent, peak surge depth and velocity. Our growing database of natural debris-flow events can be used to develop linkages between observed debris-flow transport and entrainment responses and the controlling rainstorm characteristics and flow properties.
Blood Sampling in Newborns: A Systematic Review of YouTube Videos.
Bueno, Mariana; Nishi, Érika Tihemi; Costa, Taine; Freire, Laís Machado; Harrison, Denise
The objective of this study was to conduct a systematic review of YouTube videos showing neonatal blood sampling, and to evaluate the pain management and comforting interventions used. Selected videos were consumer- or professional-produced videos showing human newborns undergoing heel lancing or venipuncture for blood sampling, showing the entire blood sampling procedure (from the first attempt or puncture to the application of a cotton ball or bandage), with a publication date prior to October 2014, Portuguese titles, and available audio. Search terms included "neonate," "newborn," "neonatal screening," and "blood collection." Two reviewers independently screened the videos and extracted the data. A total of 13 140 videos were retrieved, of which 1354 were further evaluated and 68 were included. Videos were mostly consumer produced (97%). Heel lancing was performed in 62 (91%). Forty-nine infants (72%) were held by an adult during the procedure. The median pain score immediately after puncture was 4 (interquartile range [IQR] = 0-5), and the median length of cry throughout the procedure was 61 seconds (IQR = 88). Breastfeeding (3%) and swaddling (1.5%) were rarely implemented. Posted YouTube videos in Portuguese of newborns undergoing blood collection demonstrate minimal use of pain treatment and maximal distress during procedures. Knowledge translation strategies are needed to implement effective measures for neonatal pain relief and comfort.
Engelhardt, Christopher R; Mazurek, Micah O
2014-07-01
Environmental correlates of problem behavior among individuals with autism spectrum disorder remain relatively understudied. The current study examined the contribution of in-room (i.e. bedroom) access to a video game console as one potential correlate of problem behavior among a sample of 169 boys with autism spectrum disorder (ranging from 8 to 18 years of age). Parents of these children reported on (1) whether they had specific rules regulating their child's video game use, (2) whether their child had in-room access to a variety of screen-based media devices (television, computer, and video game console), and (3) their child's oppositional behaviors. Multivariate regression models showed that in-room access to a video game console predicted oppositional behavior while controlling for in-room access to other media devices (computer and television) and relevant variables (e.g. average number of video game hours played per day). Additionally, the association between in-room access to a video game console and oppositional behavior was particularly large when parents reported no rules on their child's video game use. The current findings indicate that both access and parental rules regarding video games warrant future experimental and longitudinal research as they relate to problem behavior in boys with autism spectrum disorder. © The Author(s) 2013.
Electronic magnification and perceived contrast of video
Haun, Andrew; Woods, Russell L; Peli, Eli
2012-01-01
Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to a perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect on perceived contrast of magnification in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole, and that this difference accounts for a minor part of the perceived differences. Instead, visibility of ‘missing content’ (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors. PMID:23483111
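The claim above that local regions within videos usually have lower physical contrast than the whole can be illustrated with RMS contrast (here defined as the standard deviation of luminance divided by its mean, one common convention; the synthetic frame below is an assumption of this sketch, not the study's stimuli):

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of luminance over its mean."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()

# Synthetic "frame": a bright graded patch on a dark background, so the
# global image spans a wider luminance range than any local crop does.
frame = np.full((200, 200), 20.0)
frame[50:150, 50:150] = 100.0 + np.arange(100.0)  # bright region, ramped

whole = rms_contrast(frame)                  # high: dark vs bright regions
local = rms_contrast(frame[60:90, 60:90])    # low: crop inside the patch
```

The whole-frame contrast is driven by the dark/bright segmentation, while the magnified crop sees only the gentle ramp, mirroring the paper's point that magnified sub-regions carry less physical contrast than the source video.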
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atari, N.A.; Svensson, G.K.
1986-05-01
A high-resolution digital dosimetric system has been developed for the spatial characterization of radiation fields. The system comprises the following: a 0.5-mm-thick, 25-mm-diam CaF2:Dy thermoluminescent crystal; an intensified charge coupled device video camera; a video cassette recorder; and a computerized image processing subsystem. The optically flat single crystal is used as a radiation imaging device and the subsequent thermally stimulated phosphorescence is viewed by the intensified camera for further processing and analysis. Parameters governing the performance characteristics of the system were measured. A spatial resolution limit of 31 ± 2 μm (1σ), corresponding to 16 ± 1 line pairs/mm measured at the 4% level of the modulation transfer function, has been achieved. The full width at half maximum of the line spread function, measured independently by the slit method or derived from the edge response function, was found to be 69 ± 4 μm (1σ). The high resolving power, speed of readout, good precision, wide dynamic range, and large image storage capacity make the system suitable for the digital mapping of the relative distribution of absorbed doses for various small radiation fields and the edges of larger fields.
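The edge-response route to the line spread function mentioned above can be sketched numerically: differentiate a measured edge response function (ESF) to obtain the LSF, then read off its full width at half maximum. The Gaussian ESF below is synthetic, with sigma chosen so the true FWHM is about 69 μm to match the reported figure; it is an illustration, not the authors' procedure.

```python
import numpy as np
from math import erf, sqrt

def fwhm_from_esf(x, esf):
    """Differentiate an edge response function to get the line spread
    function, then return its full width at half maximum, locating the
    two half-height crossings by linear interpolation."""
    lsf = np.gradient(esf, x)
    lsf = lsf / lsf.max()
    above = np.nonzero(lsf >= 0.5)[0]
    i, j = above[0], above[-1]
    left = np.interp(0.5, [lsf[i - 1], lsf[i]], [x[i - 1], x[i]])
    right = np.interp(0.5, [lsf[j + 1], lsf[j]], [x[j + 1], x[j]])
    return right - left

# Synthetic ESF: integral of a Gaussian LSF with sigma = 29.3 um, whose
# true FWHM is 2*sqrt(2*ln 2)*sigma, approximately 69 um.
sigma = 29.3
x = np.linspace(-200.0, 200.0, 2001)  # position in microns, 0.2 um steps
esf = np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])
width = fwhm_from_esf(x, esf)  # recovers roughly 69 um
```

In practice the measured ESF would first be denoised or fitted, since numerical differentiation amplifies noise; the slit method cited in the abstract avoids that step by imaging the LSF directly.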
The scope of nonsuicidal self-injury on YouTube.
Lewis, Stephen P; Heath, Nancy L; St Denis, Jill M; Noble, Rick
2011-03-01
Nonsuicidal self-injury, the deliberate destruction of one's body tissue (eg, self-cutting, burning) without suicidal intent, occurs at rates consistently ranging from 14% to 24% among youth and young adults. With more youth using video-sharing Web sites (eg, YouTube), this study examined the accessibility and scope of nonsuicidal self-injury videos online. Using YouTube's search engine (and the following key words: "self-injury" and "self-harm"), the 50 most viewed character (ie, with a live individual) and noncharacter videos (100 total) were selected and examined across key quantitative and qualitative variables. The top 100 videos analyzed were viewed over 2 million times, and most (80%) were accessible to a general audience. Viewers rated the videos positively (M = 4.61, SD = 0.61, out of 5.0) and selected videos as a favorite over 12 000 times. The videos' tones were largely factual or educational (53%) or melancholic (51%). Explicit imagery of self-injury was common. Specifically, 90% of noncharacter videos had nonsuicidal self-injury photographs, whereas 28% of character videos had in-action nonsuicidal self-injury. For both, cutting was the most common method. Many videos (58%) did not warn about this content. The nature of nonsuicidal self-injury videos on YouTube may foster normalization of nonsuicidal self-injury and may reinforce the behavior through regular viewing of nonsuicidal self-injury-themed videos. Graphic videos showing nonsuicidal self-injury are frequently accessed and received positively by viewers. These videos largely provide nonsuicidal self-injury information and/or express a hopeless or melancholic message. Professionals working with youth and young adults who enact nonsuicidal self-injury need to be aware of the scope and nature of nonsuicidal self-injury on YouTube.
Mitre, Naim; Foster, Randal C; Lanningham-Foster, Lorraine; Levine, James A.
2014-01-01
Background Screen time continues to be a major contributing factor to sedentariness in children. There have been more creative approaches to increase physical activity over the last few years. One approach has been through the use of video games. In the present study we investigated the effect of television watching and the use of activity-promoting video games on energy expenditure and movement in lean and obese children. Our primary hypothesis was that energy expenditure and movement decrease while watching television, in lean and obese children. Our secondary hypothesis was that energy expenditure and movement increase when playing the same game with an activity-promoting video game console compared to a sedentary video game console, in lean and obese children. Methods Eleven boys (10 ± 1 year) and eight girls (9 ± 1 year) ranging in BMI from 14–29 kg/m2 (eleven lean and eight overweight or obese) were recruited. Energy expenditure and physical activity were measured while participants were watching television, playing a video game on a traditional sedentary video game console, and playing the same video game on an activity-promoting video game (Nintendo Wii) console. Results Energy expenditure when children played the video game on the activity-promoting console was significantly greater than during television watching or play on the sedentary console (125.3 ± 38.2 kcal/hr vs. 79.7 ± 20.1 and 79.4 ± 15.7, P<0.0001, respectively). When examining movement with accelerometry, children moved significantly more when playing the video game on the Nintendo Wii console (p<0.0001). Conclusion The amount of movement and energy expenditure during television watching and playing video games on a sedentary video game console is not different. Activity-promoting video games have been shown to increase movement, and can be an important tool to raise energy expenditure by 50% when compared to sedentary activities of daily living. PMID:22145458
Performance enhancing water skipping: successive free surface impacts of elastic spheres
NASA Astrophysics Data System (ADS)
Hurd, Randy; Truscott, Tadd; Belden, Jesse
2014-11-01
From naval gunners skipping cannonballs to children skipping stones, physicists have long been enamored with the repeated ricochet of objects on the water surface. Elastic spheres, such as the toy Waboba ball, make water skipping more accessible to the masses by expanding the range of impact parameters over which objects can be skipped. For example, it is not difficult to achieve more than twenty skips with such spheres, where skipping a stone twenty times is very difficult. In this talk we discuss the dynamics of water skipping elastic spheres over several successive skips. High-speed video captured using a unique experimental setup reveals how dynamics change with each skip as a result of lost kinetic energy. We place these observations in the context of previous work on single oblique impacts to identify material vibration modes that are excited during ricochet. The material modes excited with each successive impact are seen to decay from high-energy modes to low energy modes until water entry finally occurs. A model for estimating skipping outcome from initial conditions is proposed.
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
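The work-farm pattern described above, a parallel set of worker objects with one input stream and one output stream, can be sketched with ordinary Python threads and queues as a stand-in for the MPPA's structural object model (an illustration only; the function names and queue-based plumbing are assumptions of this sketch, not the platform's API):

```python
import queue
import threading

def work_farm(items, worker_fn, n_workers=4):
    """A minimal work farm: one input stream fanned out to parallel
    workers, with results merged into one output stream. Order is
    preserved by tagging each item with its index."""
    inbox, outbox = queue.Queue(), queue.Queue()

    def worker():
        while True:
            tag, item = inbox.get()
            if tag is None:  # sentinel: shut this worker down
                return
            outbox.put((tag, worker_fn(item)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    items = list(items)
    for i, item in enumerate(items):
        inbox.put((i, item))
    for _ in threads:          # one sentinel per worker
        inbox.put((None, None))
    for t in threads:
        t.join()
    results = [outbox.get() for _ in range(len(items))]
    return [value for _, value in sorted(results)]

# e.g. squaring a stream of numbers across four workers
out = work_farm(range(8), lambda x: x * x)
```

On the MPPA the workers would be objects on separate RISC processors linked by self-synchronizing channels rather than threads sharing queues, but the fan-out/fan-in topology is the same.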