Sample records for speed video analysis

  1. Terminal Performance of Lead Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video

    DTIC Science & Technology

    2016-04-04

    Terminal Performance of Lead-Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video ELIJAH COURTNEY, AMY...quantified using high-speed video. The temporary stretch cavities and permanent wound cavities are also characterized. Two factors tend to reduce the...Performance of Lead-Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video cavity. In addition, stretching can also

  2. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s⁻¹ and Tracker Video Analysis (Tracker) software. We present empirical data for…
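    The decay that such footage captures can be analysed with the logarithmic decrement: successive peaks of a damped cosine shrink by exactly exp(-γT) per period. A minimal stdlib sketch with synthetic positions standing in for Tracker exports (the 120 fps rate matches the paper; the damping constant and frequency are illustrative assumptions):

```python
import math

FPS = 120.0                              # frame rate used in the paper
GAMMA = 0.8                              # assumed damping constant (1/s), illustrative
OMEGA = 2 * math.pi * 2.0                # assumed angular frequency (2 Hz), illustrative

# Synthetic "tracked" displacement x(t) = exp(-gamma t) * cos(omega t)
xs = [math.exp(-GAMMA * n / FPS) * math.cos(OMEGA * n / FPS) for n in range(600)]

# Locate successive positive peaks of the sampled oscillation
peaks = [x for i, x in enumerate(xs[1:-1], 1) if xs[i - 1] < x > xs[i + 1] and x > 0]

# Logarithmic decrement between consecutive peaks equals gamma * T
period = 2 * math.pi / OMEGA
decrements = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
gamma_est = sum(decrements) / len(decrements) / period
print(round(gamma_est, 2))  # → 0.8
```

Averaging the decrement over several peak pairs suppresses single-frame sampling error, which matters at modest rates such as 120 fps.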

  3. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is conducted to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
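    The pipeline described here (reshape subimages into column vectors, take orthonormal image bases from the SVD, project frames onto them) can be sketched with synthetic data; the frame count, subimage size, and injected signal below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 200, 8, 8                      # frames and subimage size (illustrative)

# Synthetic "sound-induced" signal modulating a fixed spatial pattern
t = np.arange(T)
signal = np.sin(2 * np.pi * 0.05 * t)
pattern = rng.normal(size=(H, W))
base = rng.normal(size=(H, W))
frames = base[None] + 0.1 * signal[:, None, None] * pattern

# Reshape each subimage into a column vector and stack into a matrix
M = frames.reshape(T, H * W).T           # shape (64, 200)
M = M - M.mean(axis=1, keepdims=True)    # remove the static background

# Orthonormal image bases (OIBs) from the SVD; project frames onto the first OIB
U, S, Vt = np.linalg.svd(M, full_matrices=False)
recovered = U[:, 0] @ M                  # 1-D vibration estimate

# Up to sign and scale, the projection should track the injected signal
corr = np.corrcoef(recovered, signal)[0, 1]
print(abs(corr) > 0.99)  # → True
```

The SVD needs to be computed only once on an initial batch; later subimages are recovered by a cheap dot product, which is where the claimed efficiency comes from.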

  4. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…

  5. Reliability verification of vehicle speed estimate method in forensic videos.

    PubMed

    Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan

    2018-06-01

    In various types of traffic accidents, including car-to-car crashes, vehicle-pedestrian collisions, and hit-and-run accidents, driver overspeed is one of the critical issues of traffic accident analysis. Hence, analysis of vehicle speed at the moment of the accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) that applies a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing the speeds obtained by applying the VSEM to videos of a test vehicle against the vehicle's global positioning system (GPS)-based Vbox speed. The VSEM verified by these procedures was then applied to real traffic accident examples to evaluate its usability.
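    At its core, any frame-based speed estimate reduces to calibrated distance over frame time; the paper's virtual-plane and reference-line geometry is a refinement of this. A toy sketch with assumed calibration values (the frame rate and metres-per-pixel scale are invented for illustration):

```python
FPS = 30.0                 # assumed video frame rate of the forensic footage
METRES_PER_PIXEL = 0.05    # assumed calibration from a reference line of known length

def speed_kmh(px_travelled: float, n_frames: int) -> float:
    """Average speed of a tracked vehicle over n_frames video frames."""
    metres = px_travelled * METRES_PER_PIXEL
    seconds = n_frames / FPS
    return metres / seconds * 3.6

# A vehicle crossing 400 px of image in 20 frames (2/3 s) at this calibration:
print(round(speed_kmh(400, 20), 1))  # → 108.0 (km/h)
```

In practice the per-pixel scale varies across the image, which is exactly what a projected virtual plane corrects for.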

  6. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
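    The TCI measurement model underlying this reconstruction can be sketched directly: each compressive frame is a mask-weighted sum of T consecutive high-speed frames. A minimal sketch (random binary masks and random frames stand in for real coded apertures and scenes):

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 256, 256          # temporal compression ratio and frame size from the paper

# T consecutive high-speed frames and T binary coded masks
frames = rng.random((T, H, W))
masks = rng.integers(0, 2, size=(T, H, W))

# One compressive measurement: the mask-weighted frames summed over the window.
# TwIST or GMM reconstruction (as in the paper) would invert this model.
y = (masks * frames).sum(axis=0)

# For reconstruction, the 256x256 measurement is split into 8x8 patches
patches = y.reshape(H // 8, 8, W // 8, 8).swapaxes(1, 2)
print(y.shape, patches.shape)  # → (256, 256) (32, 32, 8, 8)
```

Eight temporal frames collapse into one measured frame, so the reconstruction problem is underdetermined by a factor of 8 per pixel, which is why a sparsity or mixture-model prior is needed.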

  7. High-Speed Video Analysis in a Conceptual Physics Class

    ERIC Educational Resources Information Center

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  8. A functional video-based anthropometric measuring system

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1982-01-01

    A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed off-the-shelf motion analysis system for collecting optical information was used. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.

  9. High speed photography, videography, and photonics IV; Proceedings of the Meeting, San Diego, CA, Aug. 19, 20, 1986

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor)

    1986-01-01

    Various papers on high-speed photography, videography, and photonics are presented. The general topics addressed include: photooptical and video instrumentation, streak camera data acquisition systems, photooptical instrumentation in wind tunnels, applications of holography and interferometry in wind tunnel research programs, and data analysis for photooptical and video instrumentation.

  10. Speed Biases With Real-Life Video Clips

    PubMed Central

    Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio

    2018-01-01

    We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875

  11. Speed Biases With Real-Life Video Clips.

    PubMed

    Rossi, Federica; Montanaro, Elisa; de'Sperati, Claudio

    2018-01-01

    We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate "natural" video compression techniques based on sub-threshold temporal squeezing.

  12. "Diagnosis by behavioral observation" home-videosomnography - a rigorous ethnographic approach to sleep of children with neurodevelopmental conditions.

    PubMed

    Ipsiroglu, Osman S; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric

    2015-01-01

    Advanced video technology is available for sleep-laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. We suggest an "ideal set of hardware/software" that is reliable, affordable (∼$500) and portable (≤2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a net-book, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4–8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications. The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new "observational sleep medicine," which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors.

  13. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on the use of an optic flow analysis program to measure the resemblance of the motion characteristics of computer-generated animations to videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations are similar to the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics in animation and video stimuli are represented, and this feature is a critical component in the future success of the video playback technique.

  14. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    NASA Astrophysics Data System (ADS)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.
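    The pulse-speed measurement reduces to counting slow-motion frames between two marks on the spring. A stdlib sketch with invented numbers (the capture rate, mark distance, and frame indices are illustrative assumptions, not the paper's data):

```python
# Pulse speed from slow-motion footage: time how many frames the pulse front
# takes to travel between two marks on the spring, a known distance apart.
CAPTURE_FPS = 240          # assumed smartphone slow-motion rate (device-dependent)
mark_distance_m = 0.60     # assumed distance between the two marks on the spring

frame_a, frame_b = 112, 160    # illustrative frames where the front passes each mark
speed = mark_distance_m * CAPTURE_FPS / (frame_b - frame_a)
print(round(speed, 1))  # → 3.0 (m/s)
```

Note that the relevant rate is the capture rate, not the playback rate: a 240 fps clip played at 30 fps shows the same frames, only slower.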

  15. Application of Integral Optical Flow for Determining Crowd Movement from Video Images Obtained Using Video Surveillance Systems

    NASA Astrophysics Data System (ADS)

    Chen, H.; Ye, Sh.; Nedzvedz, O. V.; Ablameyko, S. V.

    2018-03-01

    Study of crowd movement is an important practical problem, and its solution is used in video surveillance systems for preventing various emergency situations. In the general case, a group of fast-moving people is of more interest than a group of stationary or slow-moving people. We propose a new method for crowd movement analysis using a video sequence, based on integral optical flow. We have determined several characteristics of a moving crowd such as density, speed, direction of motion, symmetry, and in/out index. These characteristics are used for further analysis of a video scene.
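    Given a dense flow field, the crowd characteristics listed above (density, speed, direction) are simple aggregates. A sketch over a synthetic flow field (a real pipeline would compute the field from consecutive surveillance frames with a dense optical-flow algorithm; the field statistics here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 120, 160
# Stand-in optical-flow field: (dx, dy) per pixel, drifting right at ~1.5 px/frame
flow = rng.normal(loc=(1.5, 0.0), scale=0.3, size=(H, W, 2))

speeds = np.linalg.norm(flow, axis=2)
mean_speed = speeds.mean()                        # crowd speed characteristic
mean_vec = flow.reshape(-1, 2).mean(axis=0)
direction = np.degrees(np.arctan2(mean_vec[1], mean_vec[0]))  # dominant direction
moving_density = (speeds > 1.0).mean()            # fraction of pixels in motion

print(round(mean_speed, 1), round(direction), round(moving_density, 2))
```

Integral (accumulated) optical flow, as proposed in the paper, additionally sums the field over time so that slow but persistent motion is not lost in frame-to-frame noise.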

  16. Video image processing to create a speed sensor

    DOT National Transportation Integrated Search

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  17. As time passes by: Observed motion-speed and psychological time during video playback.

    PubMed

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production.

  18. As time passes by: Observed motion-speed and psychological time during video playback

    PubMed Central

    Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production. PMID:28614353

  19. Does the Podcast Video Playback Speed Affect Comprehension for Novel Curriculum Delivery? A Randomized Trial.

    PubMed

    Song, Kristine; Chakraborty, Amit; Dawson, Matthew; Dugan, Adam; Adkins, Brian; Doty, Christopher

    2018-01-01

    Medical education is a rapidly evolving field that has been using new technology to improve how medical students learn. One of the recent implementations in medical education is the recording of lectures for the purpose of playback at various speeds. Though previous studies done via surveys have shown a subjective increase in the rate of knowledge acquisition when learning from sped-up lectures, no quantitative studies have measured information retention. The purpose of this study was to compare mean test scores on written assessments to objectively determine if watching a video of a recorded lecture at 1.5× speed was significantly different than 1.0× speed for the immediate retention of novel material. Fifty-four University of Kentucky medical students volunteered to participate in this study. The subjects were divided into two separate groups: Group A and Group B. Each group watched two separate videos, the first at 1.5× speed and the second at 1.0× speed, then completed assessments following each. The topics of the two videos were ultrasonography artifacts and transducers. Group A watched the artifacts video first at 1.5× speed followed by the transducers video at 1.0× speed. Group B watched the transducers video first at 1.5× speed followed by the artifacts video at 1.0× speed. The percentage correct on the written assessment was calculated for each subject at each video speed. The mean and standard deviation were also calculated using a t-test to determine if there was a significant difference in assessment scores between 1.5× and 1.0× speeds. There was a significant (p=0.0188) detriment in performance on the artifacts quiz at 1.5× speed (mean 61.4; 95% confidence interval [CI] 53.9–68.9) compared to the control group at normal speed (mean 72.7; 95% CI 66.8–78.6). On the transducers assessment, there was not a significant (p=0.1365) difference in performance in the 1.5× speed group (mean 66.9; 95% CI 59.8–74.0) compared to the control group (mean 73.8; 95% CI 67.7–79.8). These findings suggest that, unlike previously published studies that showed subjective improvement in performance with sped-up video-recorded lectures compared to normal speed, objective performance may be worse.
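    The comparison reported here is a two-sample t-test on quiz percentages. A stdlib sketch of Welch's t statistic with made-up scores (the data below are hypothetical, not the study's; the study's exact t-test variant is not stated):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical quiz scores (percent correct), invented for illustration
fast = [55, 60, 58, 65, 62, 59, 64, 57]     # watched at 1.5x speed
normal = [70, 75, 72, 68, 74, 71, 77, 73]   # watched at 1.0x speed

t = welch_t(fast, normal)
print(t < -2)  # → True; a large negative t favours the normal-speed group
```

A p-value would then be read from the t distribution with Welch-Satterthwaite degrees of freedom, which is the step a statistics library normally handles.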

  20. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    DTIC Science & Technology

    2017-10-01

    ARL-TR-8185, October 2017, US Army Research Laboratory. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  1. Estimating Burst Swim Speeds and Jumping Characteristics of Silver Carp (Hypophthalmichthys molitrix) Using Video Analyses and Principles of Projectile Physics

    DTIC Science & Technology

    2016-09-01

    Estimating Burst Swim Speeds and Jumping Characteristics of Silver Carp (Hypophthalmichthys molitrix) Using Video Analyses and Principles of Projectile Physics, by Glenn R. Parsons, Ehlana Stell...(2002) estimated maximum swim speeds of videotaped, captive, and free-ranging dolphins, Delphinidae, by timed sequential analyses of video frames...videos to estimate the swim speeds and leap characteristics of carp as they exit the water's surface. We used both direct estimates of swim speeds as

  2. “Diagnosis by Behavioral Observation” Home-Videosomnography – A Rigorous Ethnographic Approach to Sleep of Children with Neurodevelopmental Conditions

    PubMed Central

    Ipsiroglu, Osman S.; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L.; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric

    2015-01-01

    Introduction: Advanced video technology is available for sleep-laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. Methods: We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. Results: We suggest an "ideal set of hardware/software" that is reliable, affordable (∼$500) and portable (≤2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a net-book, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4–8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications.
Conclusion: The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new “observational sleep medicine,” which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors. PMID:25852578

  3. The experiments and analysis of several selective video encryption methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, covering the slices, the I-frames, the motion vectors, and the DCT coefficients. We use AES encryption in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be arbitrarily selected and is designed using the double-limit counting method, so the accuracy can be increased.
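    The idea of selective encryption is to cipher only the perceptually critical elements (e.g. I-frame payloads) and leave the rest in the clear. A dependency-free toy sketch: the frame structure is invented, and a SHA-256 counter-mode keystream stands in for AES purely to keep the example stdlib-only (it is NOT a secure cipher):

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy keystream from SHA-256 in counter mode -- a stand-in for AES, NOT secure."""
    for n in count():
        yield from hashlib.sha256(key + n.to_bytes(8, "big")).digest()

def encrypt_selected(frames, key: bytes):
    """Encrypt only I-frame payloads, leaving P/B frames in the clear."""
    ks = keystream(key)
    out = []
    for kind, payload in frames:
        if kind == "I":
            payload = bytes(b ^ next(ks) for b in payload)
        out.append((kind, payload))
    return out

# Invented miniature "stream": one intra-coded frame, two predicted frames
frames = [("I", b"intra-coded data"), ("P", b"predicted data"), ("B", b"bidir data")]
enc = encrypt_selected(frames, b"secret")
print([k for k, p in enc if p != dict(frames)[k]])  # → ['I']
```

Because P- and B-frames are predicted from I-frames, scrambling the I-frames alone already destroys most of the decodable picture at a fraction of the ciphering cost.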

  4. Does the Podcast Video Playback Speed Affect Comprehension for Novel Curriculum Delivery? A Randomized Trial

    PubMed Central

    Song, Kristine; Chakraborty, Amit; Dawson, Matthew; Dugan, Adam; Adkins, Brian; Doty, Christopher

    2018-01-01

    Introduction Medical education is a rapidly evolving field that has been using new technology to improve how medical students learn. One of the recent implementations in medical education is the recording of lectures for the purpose of playback at various speeds. Though previous studies done via surveys have shown a subjective increase in the rate of knowledge acquisition when learning from sped-up lectures, no quantitative studies have measured information retention. The purpose of this study was to compare mean test scores on written assessments to objectively determine if watching a video of a recorded lecture at 1.5× speed was significantly different than 1.0× speed for the immediate retention of novel material. Methods Fifty-four University of Kentucky medical students volunteered to participate in this study. The subjects were divided into two separate groups: Group A and Group B. Each group watched two separate videos, the first at 1.5× speed and the second at 1.0× speed, then completed assessments following each. The topics of the two videos were ultrasonography artifacts and transducers. Group A watched the artifacts video first at 1.5× speed followed by the transducers video at 1.0× speed. Group B watched the transducers video first at 1.5× speed followed by the artifacts video at 1.0× speed. The percentage correct on the written assessment was calculated for each subject at each video speed. The mean and standard deviation were also calculated using a t-test to determine if there was a significant difference in assessment scores between 1.5× and 1.0× speeds. Results There was a significant (p=0.0188) detriment in performance on the artifacts quiz at 1.5× speed (mean 61.4; 95% confidence interval [CI] 53.9–68.9) compared to the control group at normal speed (mean 72.7; 95% CI 66.8–78.6). On the transducers assessment, there was not a significant (p=0.1365) difference in performance in the 1.5× speed group (mean 66.9; 95% CI 59.8–74.0) compared to the control group (mean 73.8; 95% CI 67.7–79.8). Conclusion These findings suggest that, unlike previously published studies that showed subjective improvement in performance with sped-up video-recorded lectures compared to normal speed, objective performance may be worse. PMID:29383063

  5. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe within a shot. Experimental results show the framework is effective and has good performance.

  6. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
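    The boost-phase acceleration comes out of the tracked positions by finite differences. A stdlib sketch with synthetic constant-acceleration data standing in for video-analysis tracks (the frame rate and acceleration value are illustrative assumptions):

```python
FPS = 240.0               # assumed high-frame-rate capture
A_TRUE = 50.0             # assumed boost acceleration (m/s^2), illustrative

# Synthetic tracked heights y = 0.5 * a * t^2 standing in for video-analysis data
dt = 1.0 / FPS
ys = [0.5 * A_TRUE * (n * dt) ** 2 for n in range(48)]

# Second central difference estimates acceleration at each interior frame
accels = [(ys[i - 1] - 2 * ys[i] + ys[i + 1]) / dt**2 for i in range(1, len(ys) - 1)]
a_est = sum(accels) / len(accels)
print(round(a_est, 1))  # → 50.0
```

With real tracking data, second differences amplify pixel noise, so smoothing the positions (or fitting a quadratic over the boost interval) is usually preferable to raw differencing.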

  7. Automated tracking of whiskers in videos of head fixed rodents.

    PubMed

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
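    The throughput figures quoted in this abstract are internally consistent, which is worth a quick arithmetic check:

```python
# Processing rate quoted: 8 Mpx/s per CPU core; frame size 640 x 352 px
px_per_frame = 640 * 352
frames_per_second = 8_000_000 / px_per_frame
print(round(frames_per_second, 1))  # → 35.5, matching the ~35 fps quoted
```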

  9. High-Speed Videography Overview

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1989-02-01

    The field of high-speed videography (HSV) has continued to mature in recent years, due to the introduction of a mixture of new technology and extensions of existing technology. Recent low frame-rate innovations have the potential to dramatically expand the areas of information gathering and motion analysis at all frame-rates. Progress at the zero frame rate is bringing the battle of film versus video to the field of still photography. The pressure to push intermediate frame rates higher continues, although the maximum achievable frame rate has remained stable for several years. Higher maximum recording rates appear technologically practical, but economic factors impose severe limitations to development. The application of diverse photographic techniques to video-based systems is under-exploited. The basics of HSV apply to other fields, such as machine vision and robotics. Present motion analysis systems continue to function mainly as an instant replay replacement for high-speed movie film cameras. The interrelationship among lighting, shuttering and spatial resolution is examined.

  10. Management of a patient's gait abnormality using smartphone technology in-clinic for improved qualitative analysis: A case report.

    PubMed

    VanWye, William R; Hoover, Donald L

    2018-05-01

    Qualitative analysis has its limitations, as the speed of human movement often exceeds what can be comprehended in real time. Digital video allows for frame-by-frame analysis, and therefore likely more effective interventions for gait dysfunction. Although the use of digital video outside laboratory settings was, just a decade ago, challenging due to cost and time constraints, the rapid adoption of smartphones and software applications has made this technology much more practical for clinical usage. A 35-year-old man presented for evaluation with the chief complaint of knee pain 24 months status-post triple arthrodesis following a work-related crush injury. In-clinic qualitative gait analysis revealed gait dysfunction, which was augmented by using a standard iPhone® 3GS camera. After video capture, an iPhone® application (Speed Up TV®, https://itunes.apple.com/us/app/speeduptv/id386986953?mt=8) allowed for frame-by-frame analysis. Corrective techniques were employed using in-clinic equipment to develop and apply a temporary heel-to-toe rocker sole (HTRS) to the patient's shoe. Post-intervention video revealed significantly improved gait efficiency with a decrease in pain. The patient was promptly fitted with a permanent HTRS orthosis. This intervention enabled the patient to successfully complete a work conditioning program and progress to job retraining. Video allows for multiple views, which can be further enhanced by using applications for frame-by-frame analysis and zoom capabilities. This is especially useful for less experienced observers of human motion, as well as for establishing comparative signs prior to implementation of training and/or permanent devices.

  11. Visual adaptation alters the apparent speed of real-world actions.

    PubMed

    Mather, George; Sharman, Rebecca J; Parsons, Todd

    2017-07-27

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow-motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video ('re-normalisation'). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.

  12. High Speed Videometric Monitoring of Rock Breakage

    NASA Astrophysics Data System (ADS)

    Allemand, J.; Shortis, M. R.; Elmouttie, M. K.

    2018-05-01

    Estimation of rock breakage characteristics plays an important role in optimising various industrial and mining processes used for rock comminution. Although little research has been undertaken into 3D photogrammetric measurement of the progeny kinematics, there is promising potential to improve the efficacy of rock breakage characterisation. In this study, the observation of progeny kinematics was conducted using a high speed, stereo videometric system based on laboratory experiments with a drop weight impact testing system. By manually tracking individual progeny through the captured video sequences, observed progeny coordinates can be used to determine 3D trajectories and velocities, supporting the idea that high speed video can be used for rock breakage characterisation purposes. An analysis of the results showed that the high speed videometric system successfully observed progeny trajectories and showed clear projection of the progeny away from the impact location. Velocities of the progeny could also be determined based on the trajectories and the video frame rate. These results were obtained despite the limitations of the photogrammetric system and experiment processes observed in this study. Accordingly there is sufficient evidence to conclude that high speed videometric systems are capable of observing progeny kinematics from drop weight impact tests. With further optimisation of the systems and processes used, there is potential for improving the efficacy of rock breakage characterisation from measurements with high speed videometric systems.
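The velocity computation described above (tracked trajectory plus frame rate) reduces to finite differences between consecutive frames; a minimal sketch, with made-up coordinates and an assumed frame rate:

```python
# Minimal sketch (not the authors' code): estimating a progeny fragment's
# velocity from tracked 3D coordinates and the video frame rate.
# The frame rate and coordinates below are illustrative assumptions.
import math

fps = 1000.0                           # assumed high-speed frame rate, frames/s
dt = 1.0 / fps                         # time between consecutive frames, s
# two consecutive tracked positions of one fragment, in metres
p0 = (0.010, 0.020, 0.005)
p1 = (0.013, 0.024, 0.007)
velocity = tuple((b - a) / dt for a, b in zip(p0, p1))   # m/s per axis
speed = math.sqrt(sum(v * v for v in velocity))          # magnitude, m/s
print([round(v, 2) for v in velocity], round(speed, 2))
```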

  13. Analysis of the learning curve for transurethral resection of the prostate. Is there any influence of musical instrument and video game skills on surgical performance?

    PubMed

    Yamaçake, Kleiton Gabriel Ribeiro; Nakano, Elcio Tadashi; Soares, Iva Barbosa; Cordeiro, Paulo; Srougi, Miguel; Antunes, Alberto Azoubel

    2015-09-01

    To evaluate the learning curve for transurethral resection of the prostate (TURP) among urology residents and study the impact of video game and musical instrument playing abilities on its performance. A prospective study was performed from July 2009 to January 2013 with patients submitted to TURP for benign prostatic hyperplasia. Fourteen residents operated on 324 patients. The following parameters were analyzed: age, prostate-specific antigen levels, prostate weight on ultrasound, pre- and postoperative serum sodium and hemoglobin levels, weight of resected tissue, operation time, speed of resection, and incidence of capsular lesions. Gender, handedness, and prior musical instrument and video game playing experience were recorded using survey responses. The mean resection speed in the first 10 procedures was 0.36 g/min and reached a mean of 0.51 g/min after the 20th procedure. The incidence of capsular lesions decreased progressively. The operation time decreased progressively for each subgroup regardless of the difference in the weight of tissue resected. Those experienced in playing video games presented superior resection speed (0.45 g/min) when compared with the novice (0.35 g/min) and intermediate (0.38 g/min) groups (p=0.112). Musical instrument playing abilities did not affect the surgical performance. Speed of resection, weight of resected tissue, and percentage of resected tissue improve significantly and the incidence of capsular lesions reduces after the performance of 10 TURP procedures. Experience in playing video games or musical instruments does not have a significant effect on outcomes.

  14. Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records.

    PubMed

    Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut

    2013-10-01

    Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no method exists for efficient detection and tracking of exact eyelid contours in image sequences captured at high speed that is conveniently usable by psychophysiological researchers. In this publication, a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaption. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings.

  15. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact angle were found to be insufficient tools for improving the quality of judging.
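The frame-by-frame angle determination the software provides amounts to computing the angle at a joint vertex from digitised landmarks; a hedged sketch (the landmark coordinates are hypothetical, not data from the study):

```python
# Sketch of joint-angle determination from three digitised 2D landmarks
# (hip, knee, ankle). Coordinates below are made-up illustrative values.
import math

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by the points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

hip, knee, ankle = (0.0, 1.0), (0.0, 0.5), (0.2, 0.1)
angle = joint_angle(hip, knee, ankle)   # 180 degrees = fully extended knee
print(round(angle, 1))
```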

  16. Action video games do not improve the speed of information processing in simple perceptual tasks.

    PubMed

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2014-10-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.

  18. An innovative experiment on superconductivity, based on video analysis and non-expensive data acquisition

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P.

    2015-07-01

    In this paper we present a new experiment on superconductivity, designed for university undergraduate students, based on the high-speed video analysis of a magnet falling through a ceramic superconducting cylinder (Tc = 110 K). The use of an Atwood’s machine allows us to vary the magnet’s speed and acceleration during its interaction with the superconductor. In this way, we highlight the existence of two interaction regimes: for low crossing energy, the magnet is levitated by the superconductor after a transient oscillatory damping; for higher crossing energy, the magnet passes through the superconducting cylinder. The use of a commercial-grade high-speed imaging system, together with video analysis performed using the Tracker software, allows us to attain good precision in space and time measurements. Four sensing coils, mounted inside and outside the superconducting cylinder, allow us to study the magnetic flux variations in connection with the magnet’s passage through the superconductor, shedding light on a didactically relevant topic: the behaviour of magnetic field lines in the presence of a superconductor. The critical discussion of experimental data allows undergraduate university students to gain useful insights into the basic phenomenology of superconductivity as well as into relevant conceptual topics such as the difference between the Meissner effect and the Faraday-like ‘perfect’ induction.

  19. Spatio-temporal analysis of blood perfusion by imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Zaunseder, Sebastian; Trumpp, Alexander; Ernst, Hannes; Förster, Michael; Malberg, Hagen

    2018-02-01

    Imaging photoplethysmography (iPPG) has attracted much attention over the last years. The vast majority of works focuses on methods to reliably extract the heart rate from videos. Only a few works have addressed iPPG's ability to exploit spatio-temporal perfusion patterns to derive further diagnostic statements. This work addresses the spatio-temporal analysis of blood perfusion from videos. We present a novel algorithm based on the two-dimensional representation of the blood pulsation (perfusion map). The basic idea behind the proposed algorithm is a pairwise estimation of time delays between photoplethysmographic signals of spatially separated regions. The probabilistic approach yields a parameter denoted as perfusion speed. We compare the perfusion speed against two parameters that assess the strength of blood pulsation (perfusion strength and signal-to-noise ratio). Preliminary results using video data with different physiological stimuli (cold pressor test, cold face test) show that all measures are influenced by those stimuli (some of them with statistical certainty). The perfusion speed turned out to be more sensitive than the other measures in some cases. However, our results also show that the intraindividual stability and interindividual comparability of all used measures remain critical points. This work proves the general feasibility of employing the perfusion speed as a novel iPPG quantity. Future studies will address open points such as the handling of ballistocardiographic effects and will try to deepen the understanding of the predominant physiological mechanisms and their relation to the algorithmic performance.
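The pairwise time-delay estimation at the core of the algorithm can be illustrated with a basic cross-correlation search; this is a simplified sketch under assumed synthetic signals, not the authors' probabilistic implementation.

```python
# Illustrative sketch: estimate the delay between two regions' pulse-like
# signals by maximising the cross-correlation over candidate lags.
# Signals are synthetic single pulses; region B lags region A by 5 samples.
import math

fs = 100.0                          # sampling rate, Hz (assumed)
n = 200
sig_a = [math.exp(-((t - 100) / 10.0) ** 2) for t in range(n)]   # region A
sig_b = [math.exp(-((t - 105) / 10.0) ** 2) for t in range(n)]   # region B

def xcorr_lag(x, y, max_lag):
    """Lag of y relative to x (in samples) that maximises the correlation."""
    def score(lag):
        return sum(x[i] * y[i + lag] for i in range(len(x))
                   if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=score)

delay_s = xcorr_lag(sig_a, sig_b, max_lag=20) / fs
print(delay_s)                      # delay in seconds
```

Dividing the spatial separation of the two regions by such a delay gives a speed, which is the intuition behind the perfusion-speed parameter.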

  20. Kinematics of the field hockey penalty corner push-in.

    PubMed

    Kerr, Rebecca; Ness, Kevin

    2006-01-01

    The aims of the study were to determine those variables that significantly affect push-in execution and thereby formulate coaching recommendations specific to the push-in. Two 50 Hz video cameras recorded transverse and longitudinal views of push-in trials performed by eight experienced and nine inexperienced male push-in performers. Video footage was digitized for data analysis of ball speed, stance width, drag distance, drag time, drag speed, centre of mass displacement and segment and stick displacements and velocities. Experienced push-in performers demonstrated a significantly greater (p < 0.05) stance width, a significantly greater distance between the ball and the front foot at the start of the push-in and a significantly faster ball speed than inexperienced performers. In addition, the experienced performers showed a significant positive correlation between ball speed and playing experience and tended to adopt a combination of simultaneous and sequential segment rotation to achieve accuracy and fast ball speed. The study yielded the following coaching recommendations for enhanced push-in performance: maximize drag distance by maximizing front foot-ball distance at the start of the push-in; use a combination of simultaneous and sequential segment rotations to optimise both accuracy and ball speed and maximize drag speed.

  2. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been under way at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  3. Increasing Speed of Processing With Action Video Games

    PubMed Central

    Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne

    2010-01-01

    In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453

  4. STAPP: Spatiotemporal analysis of plantar pressure measurements using statistical parametric mapping.

    PubMed

    Booth, Brian G; Keijsers, Noël L W; Sijbers, Jan; Huysmans, Toon

    2018-05-03

    Pedobarography produces large sets of plantar pressure samples that are routinely subsampled (e.g. using regions of interest) or aggregated (e.g. center of pressure trajectories, peak pressure images) in order to simplify statistical analysis and provide intuitive clinical measures. We hypothesize that these data reductions discard gait information that can be used to differentiate between groups or conditions. To test the hypothesis of null information loss, we created an implementation of statistical parametric mapping (SPM) for dynamic plantar pressure datasets (i.e. plantar pressure videos). Our SPM software framework brings all plantar pressure videos into anatomical and temporal correspondence, then performs statistical tests at each sampling location in space and time. As a novel element, we introduce non-linear temporal registration into the framework in order to normalize for timing differences within the stance phase. We refer to our software framework as STAPP: spatiotemporal analysis of plantar pressure measurements. Using STAPP, we tested our hypothesis on plantar pressure videos from 33 healthy subjects walking at different speeds. As walking speed increased, STAPP was able to identify significant decreases in plantar pressure at mid-stance from the heel through the lateral forefoot. The extent of these plantar pressure decreases has not previously been observed using existing plantar pressure analysis techniques. We therefore conclude that the subsampling of plantar pressure videos - a task which led to the discarding of gait information in our study - can be avoided using STAPP.
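The mass-univariate idea behind SPM-style testing can be sketched as one test per sampling location in space and time; the data and layout below are made up, and the sketch omits registration and multiple-comparison correction.

```python
# Conceptual sketch (simplified assumption, not the STAPP code): after
# registration, run one t-test per (x, y, t) sample across subjects,
# instead of testing aggregated summaries such as peak pressure.
import statistics

def one_sample_t(values):
    """t statistic for the mean of paired differences vs zero."""
    n = len(values)
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m / (s / n ** 0.5)

# toy pressure differences between two walking speeds, at two sampling
# locations (x, y, t), for 5 subjects -- values are invented
diffs = {
    (0, 0, 0): [-1.2, -0.8, -1.0, -1.5, -0.9],   # consistent decrease
    (1, 0, 0): [0.1, -0.2, 0.3, 0.0, -0.1],      # no effect
}
t_map = {loc: round(one_sample_t(v), 2) for loc, v in diffs.items()}
print(t_map)
```

In the full framework this statistical map is computed at every pixel and time sample of the registered pressure videos, then thresholded with an appropriate correction.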

  5. Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education

    ERIC Educational Resources Information Center

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-01-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed-cameras has extended the capabilities of the tool for much smaller time scales to the 1 ms range, using frame rates of typically up to 1000 frames s[superscript -1], allowing us to study transient physics phenomena happening…

  6. Non-contact Real-time heart rate measurements based on high speed circuit technology research

    NASA Astrophysics Data System (ADS)

    Wu, Jizhe; Liu, Xiaohua; Kong, Lingqin; Shi, Cong; Liu, Ming; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2015-08-01

    In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index of these diseases. To address this, the paper puts forward a non-contact heart rate measurement with a simple structure and easy operation, suitable for daily monitoring of large populations. In this method we use imaging equipment to video the sensitive areas; changes in blood volume cause changes in the reflected light intensity, which appear in the average grayscale of the image. We video the face, which includes the sensitive regions of interest (ROI), and use a high-speed processing circuit to save the video in AVI format to memory. After processing the whole video of a period of time, we draw the curve of each color channel with frame number as the horizontal axis, and then obtain the heart rate from the curve. We use independent component analysis (ICA) to suppress the noise of motion interference, realizing accurate extraction of the heart rate signal under motion. We designed an algorithm, based on the high-speed processing circuit, for face recognition and tracking to automatically obtain the face region. We apply grayscale averaging to the recognized image, obtain the three RGB grayscale curves, extract a clearer pulse wave curve through independent component analysis, and then obtain the heart rate under motion. Finally, comparing our system with a fingertip pulse oximeter shows that the system can realize an accurate measurement, with an error of less than 3 beats per minute.
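The final step, reading a heart rate off the extracted pulse curve, can be sketched as a dominant-frequency search; the ICA separation itself is omitted here, and the synthetic signal, frame rate and band limits are assumptions rather than details from the paper.

```python
# Sketch of the last stage only: once a pulse waveform has been extracted
# from the channel curves, heart rate follows from its dominant frequency.
# The signal below is synthetic (a clean 1.2 Hz sinusoid, i.e. 72 bpm).
import cmath
import math

fs = 30.0                              # camera frame rate, frames/s (assumed)
n = 300                                # 10 s of video
pulse = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(n)]

def dominant_freq(x, fs):
    """Frequency (Hz) of the largest DFT magnitude in the 0.7-4 Hz band."""
    n = len(x)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not 0.7 <= f <= 4.0:        # plausible heart-rate band (assumed)
            continue
        mag = abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n)))
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

bpm = dominant_freq(pulse, fs) * 60    # beats per minute
print(round(bpm))
```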

  7. Dashboard Videos

    NASA Astrophysics Data System (ADS)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-11-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his Lab Out Loud blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing this website and video, I decided to create my own dashboard videos to show to my high school physics students. I have produced and synchronized 12 separate dashboard videos, each about 10 minutes in length, driving around the city of Lawrence, KS, and Douglas County, and posted them to a website.2 Each video reflects different types of driving: both positive and negative accelerations and constant speeds. As shown in Fig. 1, I was able to capture speed, distance, and miles per gallon from my dashboard instrumentation. By linking this with a stopwatch, each of these quantities can be graphed with respect to time. I anticipate and hope that teachers will find these useful in their own classrooms, i.e., having physics students watch the videos and create their own motion maps (distance-time, speed-time) for study.

  8. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.

  9. High-speed video analysis of forward and backward spattered blood droplets

    NASA Astrophysics Data System (ADS)

    Comiskey, Patrick; Yarin, Alexander; Attinger, Daniel

    2017-11-01

    High-speed videos of blood spatter due to a gunshot taken by the Ames Laboratory Midwest Forensics Resource Center are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet which caused either forward, backward, or both types of blood spatter. The analysis process utilized particle image velocimetry and particle analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side view area. This analysis revealed that forward spatter results in drops travelling twice as fast compared to backward spatter, while both types of spatter contain drops of approximately the same size. Moreover, the close-to-cone domain in which drops are issued is larger in forward spatter than in the backward one. The inclination angle of the bullet as it penetrates the target is seen to play a significant role in the directional preference of the spattered blood. Also, the aerodynamic drop-drop interaction, muzzle gases, bullet impact angle, as well as the aerodynamic wake of the bullet are seen to greatly influence the flight of the drops. The aim of this study is to provide a quantitative basis for current and future research on bloodstain pattern analysis. This work was financially supported by the United States National Institute of Justice (award NIJ 2014-DN-BXK036).

  10. ciliaFA: a research tool for automated, high-throughput measurement of ciliary beat frequency using freely available software

    PubMed Central

    2012-01-01

    Background Analysis of ciliary function for assessment of patients suspected of primary ciliary dyskinesia (PCD), and for research studies of respiratory and ependymal cilia, requires assessment of both ciliary beat pattern and beat frequency. While direct measurement of beat frequency from high-speed video recordings is the most accurate and reproducible technique, it is extremely time consuming. The aim of this study was to develop a freely available automated method of ciliary beat frequency analysis from digital video (AVI) files that runs on open-source software (ImageJ) coupled to Microsoft Excel, and to validate it by comparison with direct measurement from high-speed video recordings of respiratory and ependymal cilia. These models allowed comparison of cilia beating between 3 and 52 Hz. Methods Digital video files of motile ciliated ependymal (frequency range 34 to 52 Hz) and respiratory epithelial cells (frequency 3 to 18 Hz) were captured using a high-speed digital video recorder. To cover the intermediate range between 18 and 37 Hz, the beat frequency of ependymal cilia was slowed by the addition of the pneumococcal toxin pneumolysin. Measurements made directly by timing a given number of individual ciliary beat cycles were compared with those obtained using the automated ciliaFA system. Results The overall mean difference (± SD) between the ciliaFA and direct measurement high-speed digital imaging methods was −0.05 ± 1.25 Hz, the correlation coefficient was 0.991, and the Bland-Altman limits of agreement were from −1.99 to 1.49 Hz for respiratory and from −2.55 to 3.25 Hz for ependymal cilia. Conclusions A plugin for ImageJ was developed that extracts pixel intensities and performs fast Fourier transformation (FFT) using Microsoft Excel. The ciliaFA software allowed automated, high-throughput measurement of respiratory and ependymal ciliary beat frequency (range 3 to 52 Hz) and avoids operator error due to selection bias.
    We have included free access to the ciliaFA plugin and installation instructions in Additional file 1 accompanying this manuscript for other researchers to use. PMID:23351276
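
    The core of the ciliaFA approach is a frequency-domain analysis of pixel-intensity time series. A minimal standalone sketch of that principle (ciliaFA itself runs as an ImageJ plugin with the FFT performed in Excel; this Python DFT is only an illustration):

```python
# Sketch of the ciliaFA idea: take a pixel-intensity time series from a
# high-speed recording and pick the dominant frequency from its spectrum.
import cmath
import math

def dominant_frequency(signal, fs):
    """Return the peak frequency (Hz) of a real signal sampled at fs Hz."""
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]  # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):            # positive frequencies only
        coeff = sum(centred[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs / n

if __name__ == "__main__":
    fs, beat = 120.0, 12.0                # camera rate, true beat (Hz)
    sig = [math.sin(2 * math.pi * beat * t / fs) for t in range(240)]
    print(dominant_frequency(sig, fs))    # ~12.0
```

    Note that, as in any sampled measurement, beat frequencies above half the camera frame rate would alias, which is why the study records at high speed.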

  11. Analysis of United States’ Broadband Policy

    DTIC Science & Technology

    2007-03-01

    compared with the minimum speed the FCC uses in its definition of broadband access. For example, using a 56K modem connection to download a 10...transmission rates multiple times faster than a 56K modem , users can view video or download software and other data-intensive files in a matter of seconds...boast download speeds from 144Kbps (roughly three times faster than a 56K dial-up modem connection) to 2.4Mbps (close to cable- modem speed). Although

  12. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  13. Excitability of the Primary Motor Cortex Increases More Strongly with Slow- than with Normal-Speed Presentation of Actions

    PubMed Central

    Moriuchi, Takefumi; Iso, Naoki; Sagari, Akira; Ogahara, Kakuya; Kitajima, Eiji; Tanaka, Koji; Tabira, Takayuki; Higashi, Toshio

    2014-01-01

    Introduction The aim of the present study was to investigate how the speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS). Methods Eighteen healthy subjects watched a video clip of a person catching a ball, played at three different speeds (normal-, half-, and quarter-speed). MEPs were induced by TMS when the model's hand had opened to the widest extent just before catching the ball (“open”) and when the model had just caught the ball (“catch”). These two events were locked to specific frames of the video clip (“phases”), rather than occurring at specific absolute times, so that they could easily be compared across different speeds. MEPs were recorded from the thenar (TH) and abductor digiti minimi (ADM) muscles of the right hand. Results The MEP amplitudes were higher when the subjects watched the video clip at low speed than when they watched the clip at normal speed. A repeated-measures ANOVA, with the factor VIDEO-SPEED, showed significant main effects. Bonferroni's post hoc test showed that the following MEP amplitude differences were significant: TH, normal vs. quarter; ADM, normal vs. half; and ADM, normal vs. quarter. Paired t-tests showed that the significant MEP amplitude differences between TMS phases under each speed condition were TH, “catch” higher than “open” at quarter speed; ADM, “catch” higher than “open” at half speed. Conclusions These results indicate that the excitability of M1 was higher when the observed action was played at low speed. Our findings suggest that the action observation system became more active when the subjects observed the video clip at low speed, because the subjects could then recognize the elements of action and intention in others. PMID:25479161

  14. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels with 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture of 128 consecutive frames and 20 Mfps half-resolution video capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the improved version has been used in a commercialized high-speed video camera since 2015 that offers 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light emission phenomena.
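
    The quoted figures are mutually consistent, as a quick back-of-envelope check shows (numbers taken from the abstract, not from the paper's exact accounting):

```python
# Back-of-envelope check of the quoted readout figures.
pixels_per_frame = 400 * 256        # 102,400 pixels at full resolution
fps_full = 10e6                     # 10 Mfps at full resolution
throughput = pixels_per_frame * fps_full
print(f"{throughput:.3e} pixel/s")  # ~1.024e12, i.e. ~1 Tpixel/s

# Halving the resolution doubles the achievable frame rate for the same
# readout throughput:
fps_half = throughput / (pixels_per_frame / 2)
print(f"{fps_half / 1e6:.0f} Mfps")  # 20 Mfps
```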

  15. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
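
    To illustrate the Kalman-filtering step only (the paper's full scheme combines it with correlation and tree-structured nonlinear filters), here is a minimal scalar Kalman filter applied to a hypothetical noisy bacterium x-position; all parameters are invented for the sketch:

```python
# Minimal scalar Kalman filter (random-walk state model) for smoothing a
# noisy 1-D track, e.g. a bacterium's x-position in pixels.
def kalman_smooth(measurements, q=1e-3, r=1.0):
    """Process-noise variance q, measurement-noise variance r."""
    x, p = measurements[0], 1.0  # initial state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                   # predict: state unchanged, variance grows
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

if __name__ == "__main__":
    import random
    random.seed(0)
    true_x = 50.0
    noisy = [true_x + random.gauss(0, 2) for _ in range(100)]
    est = kalman_smooth(noisy)
    print(round(est[-1], 1))     # close to 50
```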

  16. "Flash" dance: how speed modulates perceived duration in dancers and non-dancers.

    PubMed

    Sgouramani, Helena; Vatakis, Argiro

    2014-03-01

    Speed has been proposed as a modulating factor in duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise in spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience in time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments in a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was found for the production task. Dancers were significantly less variable in their time estimations than non-dancers. Speed and experience, therefore, affect participants' estimates of time. Results are discussed in relation to the theoretical framework of current models, focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.

  17. Testing fine motor coordination via telehealth: effects of video characteristics on reliability and validity.

    PubMed

    Hoenig, Helen M; Amis, Kristopher; Edmonds, Carol; Morgan, Michelle S; Landerman, Lawrence; Caves, Kevin

    2017-01-01

    Background There is limited research on the effects of video quality on the accuracy of assessments of physical function. Methods A repeated-measures study design was used to assess the reliability and validity of the finger-nose test (FNT) and the finger-tapping test (FTT) carried out with 50 veterans who had impairment in gross and/or fine motor coordination. Videos were scored by expert raters under eight differing conditions, including in-person, high-definition video with slow-motion review, and standard-speed videos with varying bit rates and frame rates. Results FTT inter-rater reliability was excellent with slow-motion video (ICC 0.98-0.99) and good (ICC 0.59) under the normal-speed conditions. Inter-rater reliability for FNT 'attempts' was excellent (ICC 0.97-0.99) for all viewing conditions; for FNT 'misses' it was good to excellent (ICC 0.89) with slow-motion review but substantially worse (ICC 0.44) on the normal-speed videos. FTT criterion validity (i.e. compared to slow-motion review) was excellent (β = 0.94) for the in-person rater and good (β = 0.77) on normal-speed videos. Criterion validity for FNT 'attempts' was excellent under all conditions (r ≥ 0.97) and for FNT 'misses' it was good to excellent under all conditions (β = 0.61-0.81). Conclusions In general, the inter-rater reliability and validity of the FNT and FTT assessed via video technology are similar to standard clinical practice, but are enhanced with slow-motion review and/or a higher bit rate.
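
    Agreement statistics of the kind quoted here are straightforward to compute. A sketch of the Bland-Altman calculation (mean difference ± 1.96 SD of the paired differences); the paired scores below are invented examples, not the study's data:

```python
# Bland-Altman limits of agreement for two measurement methods.
import statistics

def bland_altman(a, b):
    """Return (bias, lower_loa, upper_loa) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

if __name__ == "__main__":
    video_scores  = [12, 15, 9, 20, 14, 11]  # e.g. slow-motion video ratings
    direct_scores = [13, 14, 9, 21, 15, 10]  # e.g. in-person ratings
    bias, lo, hi = bland_altman(video_scores, direct_scores)
    print(f"bias={bias:.2f}  LoA=[{lo:.2f}, {hi:.2f}]")
```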

  18. Integrating TV/digital data spectrograph system

    NASA Technical Reports Server (NTRS)

    Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.

    1975-01-01

    A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.

  19. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    NASA Astrophysics Data System (ADS)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e., players), which can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that feature detection performance is above 92% and event detection about 90%.

  20. Small Particle Impact Damage on Different Glass Substrates

    NASA Technical Reports Server (NTRS)

    Waxman, R.; Guven, I.; Gray, P.

    2017-01-01

    Impact experiments using sand particles were performed on four distinct glass substrates. The sand particles were characterized using the X-Ray micro-CT technique; 3-D reconstruction of the particles was followed by further size and shape analyses. High-speed video footage from impact tests was used to calculate the incoming and rebound velocities of the individual sand impact events, as well as particle volume. Further, video analysis was used in conjunction with optical and scanning electron microscopy to relate the incoming velocity and shape of the particles to subsequent fractures, including both radial and lateral cracks. Analysis was performed using peridynamic simulations.

  1. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.

  2. PC-based high-speed video-oculography for measuring rapid eye movements in mice.

    PubMed

    Sakatani, Tomoya; Isa, Tadashi

    2004-05-01

    We developed a new infrared video-oculographic system for on-line tracking of eye position in awake, head-fixed mice with high temporal resolution (240 Hz). The system consists of a commercially available high-speed CCD camera and image-processing software written in LabVIEW, running on an IBM PC with a plug-in video grabber board. The software calculates the center and area of the pupil by fitting a circular function to the pupil boundary, allowing robust and stable tracking of eye position in small animals such as mice. On-line calculation yields a reasonable circular fit of the pupil boundary even if part of the pupil is covered by shadows or occluded by the eyelids or corneal reflections. The pupil position in the 2-D video plane is converted to the rotation angle of the eyeball by estimating its rotation center based on an anatomical eyeball model. With this recording system, it is possible to perform quantitative analysis of rapid eye movements, such as saccades, in mice. This will provide a powerful tool for analyzing the molecular basis of oculomotor and cognitive functions using various lines of mutant mice.
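
    The circular fit at the heart of such pupil trackers can be done with a simple algebraic least-squares (Kasa) fit. This minimal sketch assumes clean boundary points; the paper's on-line version additionally copes with partial occlusion by shadows or eyelids:

```python
# Kasa algebraic least-squares circle fit to boundary points.
import math

def fit_circle(points):
    """Least-squares circle through (x, y) points: returns (cx, cy, radius)."""
    # Solve the normal equations of  x^2 + y^2 + D*x + E*y + F = 0.
    sums = [[0.0] * 4 for _ in range(3)]  # augmented 3x3 linear system
    for x, y in points:
        z = -(x * x + y * y)
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                sums[i][j] += row[i] * row[j]
            sums[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(sums[r][col]))
        sums[col], sums[piv] = sums[piv], sums[col]
        for r in range(3):
            if r != col:
                f = sums[r][col] / sums[col][col]
                for c in range(4):
                    sums[r][c] -= f * sums[col][c]
    D, E, F = (sums[i][3] / sums[i][i] for i in range(3))
    cx, cy = -D / 2, -E / 2
    radius = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, radius

if __name__ == "__main__":
    pts = [(3 + 2 * math.cos(a / 10), 4 + 2 * math.sin(a / 10))
           for a in range(63)]  # points on a circle of radius 2 at (3, 4)
    print(tuple(round(v, 3) for v in fit_circle(pts)))  # ~(3.0, 4.0, 2.0)
```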

  3. News, music videos and action movie exposure and adolescents' intentions to take risks in traffic.

    PubMed

    Beullens, Kathleen; Van den Bulck, Jan

    2008-01-01

    This study explored the relationship between adolescents' viewing of specific television genres (action movies, news and music videos) and the intention to take risks in traffic. Participants were 2194 adolescent boys and girls who completed a questionnaire on television viewing, risk perception and the intention to speed and to drive after consuming alcohol. As hypothesized, more news viewing was associated with a higher perceived risk of drunk driving and speeding. More music video viewing, on the other hand, was negatively associated with the assessment of the dangers of speeding and driving under the influence of alcohol. Girls regarded speeding and drunk driving as more dangerous than boys did. Contrary to our hypotheses, action movie viewing did not make a significant contribution to our models. Both news and music video viewing were indirectly related, via risk perception, to the intention to drive riskily. The more dangerous a particular behavior was perceived to be, the less likely respondents were to intend to exhibit this behavior in the future.

  4. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes, such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
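
    For illustration only, a tiny hand-rolled decision "tree" over the kinds of compressed-domain features the paper extracts (replay presence, scene-text amount, camera-motion level). The thresholds here are invented; the paper learns its tree from labelled MPEG clips:

```python
# Hand-set decision rules over hypothetical compressed-domain features.
def classify_clip(has_replay, text_fraction, motion_score):
    """Return 'sports' or 'non-sports' for one clip's feature vector."""
    if has_replay:                    # action replays are a strong sports cue
        return "sports"
    if motion_score > 0.6 and text_fraction < 0.2:
        return "sports"               # fast camera/object motion, little text
    return "non-sports"

if __name__ == "__main__":
    clips = [
        (True,  0.05, 0.9),   # replay detected -> sports
        (False, 0.10, 0.8),   # high motion, little text -> sports
        (False, 0.50, 0.3),   # texty, slow -> non-sports (e.g. news)
    ]
    for c in clips:
        print(classify_clip(*c))
```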

  5. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of the system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The system adopts the JPEG algorithm for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to increase the DSP's operating speed and reduce the executable code size. At the same time, appropriate addresses are assigned to memories of different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, resulting in stable, high performance.

  6. In vivo cross-sectional imaging of the phonating larynx using long-range Doppler optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Coughlan, Carolyn A.; Chou, Li-Dek; Jing, Joseph C.; Chen, Jason J.; Rangarajan, Swathi; Chang, Theodore H.; Sharma, Giriraj K.; Cho, Kyoungrai; Lee, Donghoon; Goddard, Julie A.; Chen, Zhongping; Wong, Brian J. F.

    2016-03-01

    Diagnosis and treatment of vocal fold lesions has been a long-evolving science for the otolaryngologist. Contemporary practice requires biopsy of a glottal lesion in the operating room under general anesthesia for diagnosis. Current in-office technology is limited to visualizing the surface of the vocal folds with fiber-optic or rigid endoscopy and using stroboscopic or high-speed video to infer information about submucosal processes. Previous efforts using optical coherence tomography (OCT) have been limited by small working distances and imaging ranges. Here we report the first full field, high-speed, and long-range OCT images of awake patients’ vocal folds as well as cross-sectional video and Doppler analysis of their vocal fold motions during phonation. These vertical-cavity surface-emitting laser source (VCSEL) OCT images offer depth resolved, high-resolution, high-speed, and panoramic images of both the true and false vocal folds. This technology has the potential to revolutionize in-office imaging of the larynx.

  7. A simple video-based timing system for on-ice team testing in ice hockey: a technical report.

    PubMed

    Larson, David P; Noonan, Benjamin C

    2014-09-01

    The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on-ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocells, and high-speed (240 frames per second) video. These results were then compared to allow accuracy calculations for the stopwatch and video relative to filtered photocell timing, which was used as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% as compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.
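
    Frame-count timing of this kind is simple to sketch: at 240 frames/s each frame is about 4.17 ms, and elapsed time is the frame difference divided by the frame rate. The frame numbers and repeated timings below are hypothetical:

```python
# Frame-count timing at a known frame rate, plus the coefficient of
# variation used to compare timing methods.
import statistics

FPS = 240.0

def elapsed_seconds(start_frame, end_frame, fps=FPS):
    return (end_frame - start_frame) / fps

def coefficient_of_variation(values):
    """CV in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

if __name__ == "__main__":
    # A skater crossing the start and finish lines at these frame numbers:
    print(elapsed_seconds(120, 1320))  # 5.0 s sprint
    # Repeated timings of the same trial by two evaluators:
    print(round(coefficient_of_variation([5.000, 5.004, 4.996]), 3))  # ~0.08
```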

  8. Quantitative high-speed laryngoscopic analysis of vocal fold vibration in fatigued voice of young karaoke singers.

    PubMed

    Yiu, Edwin M-L; Wang, Gaowu; Lo, Andy C Y; Chan, Karen M-K; Ma, Estella P-M; Kong, Jiangping; Barrett, Elizabeth Ann

    2013-11-01

    The present study aimed to determine whether there were physiological differences in the vocal fold vibration between nonfatigued and fatigued voices using high-speed laryngoscopic imaging and quantitative analysis. Twenty participants aged from 18 to 23 years (mean, 21.2 years; standard deviation, 1.3 years) with normal voice were recruited to participate in an extended singing task. Vocal fatigue was induced using a singing task. High-speed laryngoscopic image recordings of /i/ phonation were taken before and after the singing task. The laryngoscopic images were semiautomatically analyzed with the quantitative high-speed video processing program to extract indices related to the anteroposterior dimension (length), transverse dimension (width), and the speed of opening and closing. Significant reduction in the glottal length-to-width ratio index was found after vocal fatigue. Physiologically, this indicated either a significantly shorter (anteroposteriorly) or a wider (transversely) glottis after vocal fatigue. The high-speed imaging technique using quantitative analysis has the potential for early identification of vocally fatigued voice. Copyright © 2013 The Voice Foundation. All rights reserved.

  9. The flight of Ruellia ciliatiflora seeds

    NASA Astrophysics Data System (ADS)

    Cooper, Eric; Mosher, Molly; Whitaker, Dwight

    2017-11-01

    Fruits of Ruellia ciliatiflora explosively launch seeds at velocities over 10 m/s, reaching distances of over 7 m. Through high-speed video analysis of the seeds' flight, we have observed high rates of backspin, up to 1660 Hz, one of the fastest known rotational rates in the natural world. Analytical calculations that model the torques on the seeds as those on a Rayleigh disk, and incorporate the effects of gravity on the seeds' angles of attack, show that the seeds' backspin orientation is stable under gyroscopic precession. This stable backspin orientation maintains a small area in the direction of motion, decreasing the drag force on the seeds and thus increasing dispersal distance. From careful analysis of high-speed video of the seeds' flight, we experimentally determine the seeds' drag coefficients and find that they are consistent with the drag predicted for the streamlined orientation. By using backspin to maintain a streamlined orientation, the seeds reduce the energy cost of seed dispersal by up to a factor of ten.
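
    A spin rate can be read off high-speed frames by tracking the orientation angle of a marked point on the seed and converting degrees-per-frame into revolutions per second. The frame rate and angles below are hypothetical, and the angle must advance less than 180 deg per frame for the estimate to avoid aliasing:

```python
# Mean rotation rate from per-frame orientation angles (degrees, mod 360).
def spin_rate_hz(angles_deg, fps):
    steps = [(b - a) % 360 for a, b in zip(angles_deg, angles_deg[1:])]
    mean_step = sum(steps) / len(steps)  # degrees advanced per frame
    return mean_step / 360.0 * fps

if __name__ == "__main__":
    fps = 100000.0                       # camera fast enough for ~1660 Hz spin
    true_hz = 1660.0
    step = 360.0 * true_hz / fps         # ~5.98 deg per frame
    angles = [(i * step) % 360 for i in range(50)]
    print(round(spin_rate_hz(angles, fps), 1))  # ~1660.0
```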

  10. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
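
    The gradient (optical-flow) idea behind such velocity-field methods rests on intensity conservation: I_x * v + I_t = 0, so local velocity can be estimated from spatial and temporal intensity gradients. Real use is 2-D over full video frames; this toy sketch estimates a single velocity for one 1-D scan line:

```python
# 1-D gradient-method velocity estimate from two intensity "frames".
import math

def gradient_velocity(frame0, frame1, dt=1.0):
    """Least-squares single velocity (pixels per dt) for a 1-D line."""
    num = den = 0.0
    for i in range(1, len(frame0) - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / 2.0  # spatial gradient
        it = (frame1[i] - frame0[i]) / dt           # temporal gradient
        num += -ix * it
        den += ix * ix
    return num / den if den else 0.0

if __name__ == "__main__":
    line0 = [math.sin(0.2 * x) for x in range(100)]
    line1 = [math.sin(0.2 * (x - 0.5)) for x in range(100)]  # shifted 0.5 px
    print(round(gradient_velocity(line0, line1), 2))         # ~0.5
```

    The full method produces one such velocity vector per pixel, yielding the vector velocity field the abstract describes.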

  11. Using Video Analysis and Biomechanics to Engage Life Science Majors in Introductory Physics

    NASA Astrophysics Data System (ADS)

    Stephens, Jeff

    There is an interest in Introductory Physics for the Life Sciences (IPLS) as a way to better engage students in what may be their only physical science course. In this talk I will present some low cost and readily available technologies for video analysis and how they have been implemented in classes and in student research projects. The technologies include software like Tracker and LoggerPro for video analysis and low cost high speed cameras for capturing real world events. The focus of the talk will be on content created by students including two biomechanics research projects performed over the summer by pre-physical therapy majors. One project involved assessing medial knee displacement (MKD), a situation where the subject's knee becomes misaligned during a squatting motion and is a contributing factor in ACL and other knee injuries. The other project looks at the difference in landing forces experienced by gymnasts and cheer-leaders while performing on foam mats versus spring floors. The goal of this talk is to demonstrate how easy it can be to engage life science majors through the use of video analysis and topics like biomechanics and encourage others to try it for themselves.

  12. Enhanced visual short-term memory in action video game players.

    PubMed

    Blacker, Kara J; Curby, Kim M

    2013-08-01

    Visual short-term memory (VSTM) is critical for acquiring visual knowledge and shows marked individual variability. Previous work has illustrated a VSTM advantage among action video game players (Boot et al. Acta Psychologica 129:387-398, 2008). A growing body of literature has suggested that action video game playing can bolster visual cognitive abilities in a domain-general manner, including abilities related to visual attention and the speed of processing, providing some potential bases for this VSTM advantage. In the present study, we investigated the VSTM advantage among video game players and assessed whether enhanced processing speed can account for this advantage. Experiment 1, using simple colored stimuli, revealed that action video game players demonstrate a similar VSTM advantage over nongamers, regardless of whether they are given limited or ample time to encode items into memory. Experiment 2, using complex shapes as the stimuli to increase the processing demands of the task, replicated this VSTM advantage, irrespective of encoding duration. These findings are inconsistent with a speed-of-processing account of this advantage. An alternative, attentional account, grounded in the existing literature on the visuo-cognitive consequences of video game play, is discussed.

  13. Field-based high-speed imaging of explosive eruptions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.

    2012-12-01

    Explosive eruptions involve, by definition, physical processes that are highly dynamic over short time scales. Capturing and parameterizing such processes is a major task in eruption understanding and forecasting, and one that necessarily requires observational systems capable of high sampling rates. Seismic and acoustic networks are a prime tool for high-frequency observation of eruptions, recently joined by Doppler radar and electric sensors. In comparison with the above monitoring systems, imaging techniques provide more complete and direct information on surface processes, but usually at a lower sampling rate. However, recent developments in high-speed imaging systems now allow such information to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Our most recent setup for high-speed imaging of explosive eruptions (FAMoUS - FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at resolutions from 480x640 to 120x640 pixels; and 3) two acoustic-to-infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20' (and removed in less than 2', if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclast ejection. 
High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian explosion, with ejection velocities twice as high as previously recorded. Video-derived information on ejection velocity and ejecta mass can be combined with analytical and experimental models to constrain the physical parameters of the gas driving individual pulses. 2) Jet development. The ejection trajectory of pyroclasts can also be used to outline the spatial and temporal development of the eruptive jet and the dynamics of gas-pyroclast coupling within the jet, while high-speed thermal images add information on the temperature evolution in the jet itself as a function of pyroclast size and content. 3) Pyroclast settling. High-speed videos can be used to investigate the aerodynamic settling behavior of pyroclasts from bomb to ash in size, including ash aggregates, providing key parameters such as the drag coefficient as a function of Reynolds number, and particle density. 4) The generation and propagation of acoustic and shock waves. Phase condensation in volcanic and atmospheric aerosols is triggered by the transit of pressure waves and can be recorded in high-speed videos, allowing the speed and wavelength of the waves to be measured and compared with the corresponding infrasonic signals and theoretical predictions.
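As an illustration of how such frame-based velocity measurements work, the sketch below converts a tracked pixel trajectory into an ejection speed, given the camera frame rate and a spatial calibration. The tracking routine, calibration factor, and numbers are all hypothetical, not taken from the record.

```python
# Hypothetical sketch: estimating a pyroclast's ejection speed from
# consecutive high-speed video frames, given the frame rate (fps) and a
# spatial calibration (metres per pixel). The (x, y) pixel positions
# would come from manual or automated tracking.

def ejection_velocity(track_px, fps, m_per_px):
    """Mean speed (m/s) of a particle tracked as (x, y) pixel positions."""
    dt = 1.0 / fps
    speeds = []
    for (x0, y0), (x1, y1) in zip(track_px, track_px[1:]):
        dx = (x1 - x0) * m_per_px
        dy = (y1 - y0) * m_per_px
        speeds.append((dx * dx + dy * dy) ** 0.5 / dt)
    return sum(speeds) / len(speeds)

# A bomb rising 4 px per frame at 500 fps with a 5 cm/px calibration:
track = [(100, 200), (100, 196), (100, 192)]
v = ejection_velocity(track, fps=500, m_per_px=0.05)
# 4 px * 0.05 m/px * 500 fps = 100 m/s
```

The same bookkeeping, applied to many particles over several frames, yields the displacement and velocity fields described in the record.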

  14. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detection taking both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally, and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.
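The geometric core of mapping a detected player to court coordinates can be sketched with a planar homography; the paper uses a full 3-D camera model, so this is a simplified stand-in, and the matrix and points below are placeholders.

```python
# Illustrative sketch (not the authors' algorithm): projecting a player's
# image position onto the court plane with a 3x3 homography H, which in
# practice would be estimated from >= 4 known court-line correspondences.
import numpy as np

def to_court(H, px, py):
    """Apply a 3x3 homography to an image point; return court (x, y)."""
    v = H @ np.array([px, py, 1.0])
    return float(v[0] / v[2]), float(v[1] / v[2])

# With H = identity the mapping is a no-op, which makes the projective
# normalization easy to check:
H = np.eye(3)
print(to_court(H, 320.0, 240.0))  # (320.0, 240.0)
```

Player speed then follows from successive court positions and the frame interval, as in any tracking-based measurement.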

  15. The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

    PubMed Central

    Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.

    2015-01-01

    Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load-transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N, and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements using 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s2 for acceleration, and less than 93 mm/s for speed and 656 mm/s2 for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, when compared against 3D motion capture for a simulated repetitive motion task. PMID:25978764
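The cross-correlation template matching described can be sketched as a normalized cross-correlation search: slide a small template over each frame and take the location with the highest correlation score. This is a generic implementation, not the authors' code, and the frame, template, and sizes are invented.

```python
# Sketch of normalized cross-correlation template matching for tracking
# a region of interest across frames. All data here is synthetic.
import numpy as np

def match_template(frame, templ):
    """Return (x, y) of the best normalized cross-correlation match."""
    th, tw = templ.shape
    t = templ - templ.mean()
    best, best_xy = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            w = frame[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy

frame = np.zeros((20, 20))
templ = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
frame[5:8, 9:12] = templ            # plant the pattern at (x=9, y=5)
print(match_template(frame, templ))  # (9, 5)
```

Tracking the matched location frame to frame, then differencing positions over the frame interval, gives the speed and acceleration signals the study compares against motion capture.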

  16. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  17. Hypervelocity High Speed Projectile Imagery and Video

    NASA Technical Reports Server (NTRS)

    Henderson, Donald J.

    2009-01-01

    This DVD contains video showing the results of hypervelocity impacts. One video shows a projectile impacting a Kevlar-wrapped aluminum bottle containing gaseous oxygen at 3000 psi. Another video shows animations of a two-stage light gas gun.

  18. Active Learning Approaches by Visualizing ICT Devices with Milliseconds Resolution for Deeper Understanding in Physics

    NASA Astrophysics Data System (ADS)

    Kobayashi, Akizo; Okiharu, Fumiko

    2010-07-01

    We are developing various modularized materials in physics education to overcome students' misconceptions by use of ICT, i.e., video analysis software and ultra-high-speed digital movies, motion detectors, force sensors, current and voltage probes, temperature sensors, etc. Furthermore, we also present some new modules of active learning approaches on electric circuits using a high-speed camera and voltage probes with millisecond resolution. We are now especially trying to improve conceptual understanding by use of ICT devices with millisecond resolution in various areas of physics education. We give some modules of mass measurement by video analysis of collision phenomena using high-speed cameras: the Casio EX-F1 (1200 fps), EX-FH20 (1000 fps), and EX-FC100/150 (1000 fps). We present several new modules on collision phenomena to establish deeper understanding of the conservation law of momentum. We discuss some effective results of trials in physics education training courses for science educators, and in courses for science teachers renewing their teaching licenses every ten years in Japan. Finally, we discuss some typical results of pre-tests and post-tests in our active learning approaches based on ICT, i.e., some evidence of improvement in physics education (the increase in the ratio of correct answers is at the 50% level).

  19. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of fibre channel for such applications is largely due to its flexibility in these key areas: Network topologies can be configured in point-to-point, arbitrated loop, or switched fabric connections. The physical layer supports either copper or fiber optic implementations with a Bit Error Rate of less than 10^-12. Multiple Classes of Service are available. Multiple Upper Level Protocols are supported. Multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps, respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio, and ancillary data are systematically packaged for transport over a fibre channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.
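The quoted throughput figures can be sanity-checked: Fibre Channel at these speeds uses 8b/10b line coding, so every 10 line bits carry 8 payload bits, and a 1 Gbps line rate yields roughly 100 MBps of payload.

```python
# Back-of-envelope check of the data rates stated in the record,
# assuming 8b/10b line coding (8 payload bits per 10 line bits).

def effective_mbps(line_rate_gbps):
    bits_per_s = line_rate_gbps * 1e9 * 8 / 10  # strip 8b/10b overhead
    return bits_per_s / 8 / 1e6                 # bits/s -> megabytes/s

print(effective_mbps(1))  # 100.0
print(effective_mbps(2))  # 200.0
```

This matches the 100 and 200 MBps effective rates the paper cites for 1 and 2 Gbps hardware.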

  20. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    USGS Publications Warehouse

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean cross-sectional velocity), as well as the number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.
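Two of the metrics listed, instantaneous ground speed and path complexity, can be sketched from digitized positions as below. The complexity index here (path length divided by straight-line distance, a common sinuosity measure) is an assumption and may differ from the authors' definition; the positions and sampling interval are invented.

```python
# Sketch of spatial-behavior metrics from digitized fish positions:
# instantaneous ground speed and a path-complexity (sinuosity) index.
import math

def path_metrics(positions, dt):
    """positions: list of (x, y) in metres sampled every dt seconds."""
    steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    speeds = [s / dt for s in steps]                 # ground speeds, m/s
    straight = math.dist(positions[0], positions[-1])
    complexity = sum(steps) / straight if straight > 0 else float("inf")
    return speeds, complexity

pos = [(0, 0), (1, 0), (1, 1), (2, 1)]   # a zig-zag upstream path
speeds, c = path_metrics(pos, dt=0.5)
print(speeds)       # [2.0, 2.0, 2.0]
print(round(c, 3))  # 3 m of path over sqrt(5) m straight-line -> 1.342
```

A complexity of 1.0 would indicate a perfectly straight ascent; larger values indicate more tortuous passage attempts.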

  1. Wireless live streaming video of laparoscopic surgery: a bandwidth analysis for handheld computers.

    PubMed

    Gandsas, Alex; McIntire, Katherine; George, Ivan M; Witzke, Wayne; Hoskins, James D; Park, Adrian

    2002-01-01

    Over the last six years, streaming media has emerged as a powerful tool for delivering multimedia content over networks. Concurrently, wireless technology has evolved, freeing users from desktop boundaries and wired infrastructures. At the University of Kentucky Medical Center, we have integrated these technologies to develop a system that can wirelessly transmit live surgery from the operating room to a handheld computer. This study establishes the feasibility of using our system to view surgeries and describes the effect of bandwidth on image quality. A live laparoscopic ventral hernia repair was transmitted to a single handheld computer using five encoding speeds at a constant frame rate, and the quality of the resulting streaming images was evaluated. No video images were rendered when video data were encoded at 28.8 kilobits per second (Kbps), the slowest encoding bitrate studied. The highest-quality images were rendered at encoding speeds greater than or equal to 150 Kbps. Of note, a 15-second transmission delay was experienced with all four encoding schemes that rendered video images. We believe that the wireless transmission of streaming video to handheld computers has tremendous potential to enhance surgical education. For medical students and residents, the ability to view live surgeries, lectures, courses, and seminars on handheld computers means a larger number of learning opportunities. In addition, we envision that wireless-enabled devices may be used to telemonitor surgical procedures. However, bandwidth availability and streaming delay are major issues that must be addressed before wireless telementoring becomes a reality.

  2. High-speed holographic correlation system for video identification on the internet

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which makes it possible to easily replace the digital authorization server in FReCs with optical correlation.

  3. Onboard Systems Record Unique Videos of Space Missions

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.

  4. High-speed video analysis of forward and backward spattered blood droplets.

    PubMed

    Comiskey, P M; Yarin, A L; Attinger, D

    2017-07-01

    High-speed videos of blood spatter due to a gunshot, taken by the Ames Laboratory Midwest Forensics Resource Center (MFRC) [1], are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet, which caused forward, backward, or both types of blood spatter. The analysis process utilized particle image velocimetry (PIV) and particle-analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side-view areas. The results of this analysis revealed that the maximal velocity can be about 47 ± 5 m/s in forward spatter and about 24 ± 8 m/s in backward spatter. Moreover, our measurements indicate that the number of droplets produced is larger in forward spatter than in backward spatter. In forward and backward spatter the droplet area in the side-view images is approximately the same. The upper angles of the close-to-cone domain in which droplets are issued in forward and backward spatter are 27 ± 9° and 57 ± 7°, respectively, whereas the lower angles are 28 ± 12° and 30 ± 18°, respectively. The inclination angle of the bullet as it penetrates the target plays a large role in the directional preference of the spattered blood. Muzzle gases, the bullet impact angle, and the aerodynamic wake of the bullet are also seen to greatly influence the flight of the droplets. The intent of this investigation is to provide a quantitative basis for current and future research on bloodstain pattern analysis (BPA) of either forward or backward blood spatter due to a gunshot. Published by Elsevier B.V.

  5. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a digital image correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. 50 g of Composition C4 explosive was used as the blast loading source and set at the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The two high-speed video image sequences were used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile measured directly with a strain gauge, showing that the strain profile obtained under blast loading by the DIC method is quantitatively accurate.

  6. Reliability of a Qualitative Video Analysis for Running.

    PubMed

    Pipkin, Andrew; Kotecki, Kristy; Hetzel, Scott; Heiderscheit, Bryan

    2016-07-01

    Study Design Reliability study. Background Video analysis of running gait is frequently performed in orthopaedic and sports medicine practices to assess biomechanical factors that may contribute to injury. However, the reliability of a whole-body assessment has not been determined. Objective To determine the intrarater and interrater reliability of the qualitative assessment of specific running kinematics from a 2-dimensional video. Methods Running-gait analysis was performed on videos recorded from 15 individuals (8 male, 7 female) running at a self-selected pace (3.17 ± 0.40 m/s, 8:28 ± 1:04 min/mi) using a high-speed camera (120 frames per second). These videos were independently rated on 2 occasions by 3 experienced physical therapists using a standardized qualitative assessment. Fifteen sagittal and frontal plane kinematic variables were rated on a 3- or 5-point categorical scale at specific events of the gait cycle, including initial contact (n = 3) and midstance (n = 9), or across the full gait cycle (n = 3). The video frame number corresponding to each gait event was also recorded. Intrarater and interrater reliability values were calculated for gait-event detection (intraclass correlation coefficient [ICC] and standard error of measurement [SEM]) and the individual kinematic variables (weighted kappa [κw]). Results Gait-event detection was highly reproducible within raters (ICC = 0.94-1.00; SEM, 0.3-1.0 frames) and between raters (ICC = 0.77-1.00; SEM, 0.4-1.9 frames). Eleven of the 15 kinematic variables demonstrated substantial (κw = 0.60-0.799) or excellent (κw>0.80) intrarater agreement, with the exception of foot-to-center-of-mass position (κw = 0.59), forefoot position (κw = 0.58), ankle dorsiflexion at midstance (κw = 0.49), and center-of-mass vertical excursion (κw = 0.36). Interrater agreement for the kinematic measures varied more widely (κw = 0.00-0.85), with 5 variables showing substantial or excellent reliability. 
Conclusion The qualitative assessment of specific kinematic measures during running can be reliably performed with the use of a high-speed video camera. Detection of specific gait events was highly reproducible, as were common kinematic variables such as rearfoot position, foot-strike pattern, tibial inclination angle, knee flexion angle, and forward trunk lean. Other variables should be used with caution. J Orthop Sports Phys Ther 2016;46(7):556-561. Epub 6 Jun 2016. doi:10.2519/jospt.2016.6280.
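The agreement statistic reported throughout the study, weighted kappa, can be sketched as follows. This is a generic linear-weighted Cohen's kappa for ordinal ratings, with toy ratings rather than study data.

```python
# Minimal sketch of linearly weighted Cohen's kappa for two raters
# scoring items on a k-point ordinal scale (categories 0..k-1).
import numpy as np

def weighted_kappa(r1, r2, k):
    n = len(r1)
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= n
    p1 = obs.sum(axis=1)                 # rater 1 marginals
    p2 = obs.sum(axis=0)                 # rater 2 marginals
    exp = np.outer(p1, p2)               # chance-agreement matrix
    # Linear weights: full credit on the diagonal, none at max distance.
    w = 1 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    po = (w * obs).sum()
    pe = (w * exp).sum()
    return (po - pe) / (1 - pe)

# Perfect agreement on a 3-point scale gives kappa = 1:
print(weighted_kappa([0, 1, 2, 1], [0, 1, 2, 1], k=3))  # 1.0
```

Unlike unweighted kappa, near-miss ratings on the ordinal scale receive partial credit, which suits the 3- and 5-point categorical scales used here.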

  7. Eulerian frequency analysis of structural vibrations from high-speed video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venanzoni, Andrea; Siemens Industry Software NV, Interleuvenlaan 68, B-3001 Leuven; De Ryck, Laurent

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale — or level — can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. 
The first validation case consists of the frequency content retrieval of the tip of a shaker, excited at selected fixed frequencies. The goal of this setup is to retrieve the frequencies at which the tip is excited. The second validation case consists of two thin metal beams connected to a randomly excited bar. It is shown that the holographic representation visually highlights the predominant frequency content of each pixel and locates the global frequencies of the motion, thus retrieving the natural frequencies for each beam.
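The Eulerian idea of analysing each pixel's time history independently can be sketched with a one-pixel example: treat the gray-level signal as a time series, take its FFT, and read off the dominant frequency. The sampling rate and synthetic signal below are invented for illustration.

```python
# One-pixel sketch of Eulerian frequency analysis: the "pixel" below
# oscillates at 12 Hz and is sampled at 240 frames per second.
import numpy as np

fps = 240
t = np.arange(512) / fps
pixel = 0.5 + 0.1 * np.sin(2 * np.pi * 12 * t)   # gray-level time history

spec = np.abs(np.fft.rfft(pixel - pixel.mean()))  # remove DC, transform
freqs = np.fft.rfftfreq(len(pixel), d=1 / fps)
dominant = freqs[np.argmax(spec)]
print(dominant)  # ~12 Hz (limited by the 240/512 Hz bin spacing)
```

Repeating this per pixel and mapping the dominant frequency across the frame gives exactly the kind of "holographic" frequency map the second tool produces.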

  8. Video-processing-based system for automated pedestrian data collection and analysis when crossing the street

    NASA Astrophysics Data System (ADS)

    Mansouri, Nabila; Watelain, Eric; Ben Jemaa, Yousra; Motamed, Cina

    2018-03-01

    Computer-vision techniques for pedestrian detection and tracking have progressed considerably and become widely used in several applications. However, a quick glance at the literature shows minimal use of these techniques in pedestrian behavior and safety analysis, which might be due to the technical complexities of processing pedestrian videos. To extract pedestrian trajectories from a video automatically, all road users must be detected and tracked throughout the sequence, which is a challenging task, especially in a congested open-outdoor urban space. A multipedestrian tracker based on an interframe detection-association process was proposed and evaluated. The tracker results are used to implement an automatic video-processing tool for collecting data on pedestrians crossing the street. The variations in instantaneous speed allowed detection of the street-crossing phases (approach, waiting, and crossing). These were addressed for the first time in pedestrian road safety analysis to illustrate the causal relationship between pedestrian behaviors in the different phases. A comparison with a manual data collection method, by computing the root mean square error and the Pearson correlation coefficient, confirmed that the proposed procedures have significant potential to automate the data collection process.
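The two validation statistics named in the record, root mean square error and Pearson's correlation coefficient, are standard; a minimal implementation with invented automated-versus-manual speed samples:

```python
# Generic RMSE and Pearson's r for comparing automated measurements
# against manual ground truth. The speed values are toy numbers.
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

auto = [1.2, 1.4, 1.1, 0.9]     # automated pedestrian speeds (m/s)
manual = [1.3, 1.5, 1.0, 0.9]   # manually digitized speeds (m/s)
print(round(rmse(auto, manual), 3))      # 0.087
print(round(pearson_r(auto, manual), 3)) # 0.959
```

A low RMSE together with a correlation near 1 is what supports the paper's claim that the automated tool can replace manual digitization.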

  9. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.

  10. Video Analysis of a Plucked String: An Example of Problem-based Learning

    NASA Astrophysics Data System (ADS)

    Wentworth, Christopher D.; Buse, Eric

    2009-11-01

    Problem-based learning is a teaching methodology that grounds learning within the context of solving a real problem. Typically the problem initiates learning of concepts rather than simply being an application of the concept, and students take the lead in identifying what must be developed to solve the problem. Problem-based learning in upper-level physics courses can be challenging, because of the time and financial requirements necessary to generate real data. Here, we present a problem that motivates learning about partial differential equations and their solution in a mathematical methods for physics course. Students study a plucked elastic cord using high speed digital video. After creating video clips of the cord motion under different tensions they are asked to create a mathematical model. Ultimately, students develop and solve a model that includes damping effects that are clearly visible in the videos. The digital video files used in this project are available on the web at http://physics.doane.edu .
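A sketch of the kind of model the students might arrive at: a damped wave equation u_tt = c^2 u_xx - gamma * u_t for the plucked cord, solved by explicit finite differences with fixed ends. All parameters below are invented, not taken from the project.

```python
# Explicit finite-difference solution of a damped wave equation for a
# plucked string with fixed ends: u_tt = c^2 u_xx - gamma * u_t.
import numpy as np

c, gamma, L = 1.0, 0.5, 1.0
nx, dx = 101, L / 100
dt = 0.5 * dx / c                    # satisfies the CFL condition

x = np.linspace(0, L, nx)
u_prev = np.where(x < 0.5, 2 * x, 2 * (1 - x))   # triangular pluck, max 1.0
u = u_prev.copy()                                # zero initial velocity

for _ in range(2000):                # integrate to t = 10 s
    lap = u[:-2] - 2 * u[1:-1] + u[2:]
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * lap
                    - gamma * dt * (u[1:-1] - u_prev[1:-1]))
    u_next[0] = u_next[-1] = 0.0     # fixed ends
    u_prev, u = u, u_next

# Amplitude has decayed well below the initial 1.0, roughly following
# the exp(-gamma * t / 2) envelope visible in the videos.
print(round(float(np.abs(u).max()), 3))
```

Comparing the decay envelope of such a model against amplitudes digitized from the high-speed video is one natural way to fit the damping coefficient.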

  11. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  12. Does improved decision-making ability reduce the physiological demands of game-based activities in field sport athletes?

    PubMed

    Gabbett, Tim J; Carius, Josh; Mulvey, Mike

    2008-11-01

    This study investigated the effects of video-based perceptual training on pattern recognition and pattern prediction ability in elite field sport athletes and determined whether enhanced perceptual skills influenced the physiological demands of game-based activities. Sixteen elite women soccer players (mean +/- SD age, 18.3 +/- 2.8 years) were allocated to either a video-based perceptual training group (N = 8) or a control group (N = 8). The video-based perceptual training group watched video footage of international women's soccer matches. Twelve training sessions, each 15 minutes in duration, were conducted during a 4-week period. Players performed assessments of speed (5-, 10-, and 20-m sprint), repeated-sprint ability (6 x 20-m sprints, with active recovery on a 15-second cycle), estimated maximal aerobic power (VO2max, multistage fitness test), and a game-specific video-based perceptual test of pattern recognition and pattern prediction before and after the 4 weeks of video-based perceptual training. The on-field assessments included time-motion analysis completed on all players during a standardized 45-minute small-sided training game, and assessments of passing, shooting, and dribbling decision-making ability. No significant changes were detected in speed, repeated-sprint ability, or estimated VO2max during the training period. However, video-based perceptual training improved decision accuracy and reduced the number of recall errors, indicating improved game awareness and decision-making ability. Importantly, the improvements in pattern recognition and prediction ability transferred to on-field improvements in passing, shooting, and dribbling decision-making skills. No differences were detected between groups for the time spent standing, walking, jogging, striding, and sprinting during the small-sided training game. 
These findings demonstrate that video-based perceptual training can be used effectively to enhance the decision-making ability of field sport athletes; however, it has no effect on the physiological demands of game-based activities.

  13. Using behavioral skills training and video rehearsal to teach blackjack skills.

    PubMed

    Speelman, Ryan C; Whiting, Seth W; Dixon, Mark R

    2015-09-01

    A behavioral skills training procedure that consisted of video instructions, video rehearsal, and video testing was used to teach 4 recreational gamblers a specific skill in playing blackjack (sometimes called card counting). A multiple baseline design was used to evaluate intervention effects on card-counting accuracy and chips won or lost across participants. Before training, no participant counted cards accurately. Each participant completed all phases of the training protocol, counting cards fluently with 100% accuracy during changing speed criterion training exercises. Generalization probes were conducted while participants played blackjack in a mock casino following each training phase. Afterwards, all 4 participants were able to count cards while they played blackjack. In conjunction with count accuracy, total winnings were tracked to determine the monetary advantages associated with counting cards. After losing money during baseline, 3 of 4 participants won a substantial amount of money playing blackjack after the intervention. © Society for the Experimental Analysis of Behavior.
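
The abstract does not name the specific counting system taught; as an illustration only, the widely known Hi-Lo running count can be sketched as follows (an assumed example, not necessarily the system used in the study).

```python
# Hi-Lo running count, a common card-counting system: low cards (2-6) add one,
# middle cards (7-9) are neutral, tens and aces subtract one.
HI_LO = {**{r: 1 for r in "23456"},
         **{r: 0 for r in "789"},
         **{r: -1 for r in ["10", "J", "Q", "K", "A"]}}

def running_count(cards_seen):
    """Sum the Hi-Lo values of every card rank seen so far."""
    return sum(HI_LO[c] for c in cards_seen)

print(running_count(["2", "5", "K", "9", "A", "3"]))   # 1 + 1 - 1 + 0 - 1 + 1 = 1
```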

  14. Chaos based video encryption using maps and Ikeda time delay system

    NASA Astrophysics Data System (ADS)

    Valli, D.; Ganesan, K.

    2017-12-01

    Chaos-based cryptosystems are an efficient means of achieving fast, highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one is based on a higher-dimensional 12D chaotic map and the other on the Ikeda delay differential equation (DDE), both suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential and chosen/known-plaintext attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
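
The CBC-style chaining described above can be sketched with a much simpler chaotic source. The logistic map below stands in for the paper's 12D map and Ikeda DDE, which are far more complex; only the chaining principle (each cipher pixel depends on the keystream and the previous cipher pixel) is illustrated.

```python
# Illustrative chaos-based pixel diffusion with CBC-style chaining.
# The logistic map is only a stand-in chaotic source for this sketch.

def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def encrypt_cbc(pixels, x0=0.3741, r=3.9999, iv=0x5A):
    """Each cipher pixel mixes the plain pixel, the key, and the previous cipher pixel."""
    ks = logistic_keystream(x0, r, len(pixels))
    out, prev = [], iv
    for p, k in zip(pixels, ks):
        c = p ^ k ^ prev        # diffusion: current plain, keystream, previous cipher
        out.append(c)
        prev = c
    return out

def decrypt_cbc(cipher, x0=0.3741, r=3.9999, iv=0x5A):
    ks = logistic_keystream(x0, r, len(cipher))
    out, prev = [], iv
    for c, k in zip(cipher, ks):
        out.append(c ^ k ^ prev)
        prev = c
    return out

plain = [120, 120, 121, 119]             # a flat image region: near-identical pixels
cipher = encrypt_cbc(plain)
print(cipher)                            # chaining hides the flat structure
print(decrypt_cbc(cipher) == plain)      # round trip succeeds
```

Because of the chaining, identical plaintext pixels generally map to different cipher values, which is what defeats simple statistical attacks on flat image regions.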

  15. Ramp It Up and Down

    ERIC Educational Resources Information Center

    Heck, André; van Buuren, Onne

    2017-01-01

    We describe a simple experiment about sliding friction of an object moving with non-constant speed along an inclined plane. This experiment can be used to study the entire dynamical process of force and motion in various ways, depending on the mathematical level of the students. We discuss how video measurement and analysis, and mathematical…
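
One common analysis for this kind of ramp experiment (the paper's exact treatment is not given in the truncated abstract) is to fit the video-tracked positions for the acceleration magnitudes while the object slides up and then back down, since |a_up| = g(sin θ + μ cos θ) and |a_down| = g(sin θ - μ cos θ). The numbers below are hypothetical.

```python
import math

# Extracting the sliding friction coefficient from up/down accelerations:
#   |a_up| - |a_down| = 2 g mu cos(theta)
#   |a_up| + |a_down| = 2 g sin(theta)
# so mu = (|a_up| - |a_down|) / (|a_up| + |a_down|) * tan(theta).

theta = math.radians(25.0)   # ramp angle (assumed)
a_up, a_down = 5.92, 2.37    # acceleration magnitudes from video analysis (assumed), m/s^2

mu = (a_up - a_down) / (a_up + a_down) * math.tan(theta)
print(f"mu = {mu:.3f}")      # about 0.20 for these illustrative numbers
```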

  16. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storages, which record the video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase the ultrahigh-speed capture time, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. One problem was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to build an ultrahigh-speed color video camera with 288 frame memories without decreasing the camera's light sensitivity.

  17. Video image processing greatly enhances contrast, quality, and speed in polarization-based microscopy

    PubMed Central

    1981-01-01

    Video cameras with contrast and black level controls can yield polarized light and differential interference contrast microscope images with unprecedented image quality, resolution, and recording speed. The theoretical basis and practical aspects of video polarization and differential interference contrast microscopy are discussed and several applications in cell biology are illustrated. These include: birefringence of cortical structures and beating cilia in Stentor, birefringence of rotating flagella on a single bacterium, growth and morphogenesis of echinoderm skeletal spicules in culture, ciliary and electrical activity in a balancing organ of a nudibranch snail, and acrosomal reaction in activated sperm. PMID:6788777

  18. Magnetic Thin Films for Perpendicular Magnetic Recording Systems

    NASA Astrophysics Data System (ADS)

    Sugiyama, Atsushi; Hachisu, Takuma; Osaka, Tetsuya

    In the advanced information society of today, information storage technology, which helps to store a mass of electronic data and offers high-speed random access to the data, is indispensable. Against this background, hard disk drives (HDD), which are magnetic recording devices, have gained in importance because of their advantages in capacity, speed, reliability, and production cost. These days, the uses of HDD extend not only to personal computers and network servers but also to consumer electronics products such as personal video recorders, portable music players, car navigation systems, video games, video cameras, and personal digital assistants.

  19. First high speed imaging of lightning from summer thunderstorms over India: Preliminary results based on amateur recording using a digital camera

    NASA Astrophysics Data System (ADS)

    Narayanan, V. L.

    2017-12-01

    For the first time, high-speed imaging of lightning from a few isolated tropical thunderstorms is reported from India. The recordings were made from Tirupati (13.6°N, 79.4°E, 180 m above mean sea level) during the summer months with a digital camera capable of recording high-speed video at up to 480 fps. At 480 fps, each individual video file is recorded for 30 s, resulting in 14,400 deinterlaced images per video file. An automatic processing algorithm was developed for quick identification and analysis of the lightning events, which will be discussed in detail. Preliminary results indicating different types of phenomena associated with lightning, such as stepped leaders, dart leaders, and luminous channels corresponding to continuing currents and M components, are discussed. While most of the examples show cloud-to-ground discharges, a few interesting cases of intra-cloud, inter-cloud and cloud-air discharges will also be displayed. This indicates that although high-speed cameras running at a few thousand fps are preferred for detailed studies of lightning, moderate-range CMOS-sensor-based digital cameras can provide important information as well. The lightning imaging activity presented herein was initiated as an amateur effort, and plans are currently underway to propose a suite of supporting instruments to conduct coordinated campaigns. The images discussed here were acquired from a normal residential area and indicate how frequent lightning strikes are in such tropical locations during thunderstorms, even though no towering structures are nearby. It is expected that popularizing such recordings, made with affordable digital cameras, will trigger more interest in lightning research and provide a possible data source from amateur observers, paving the way for citizen science.
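
The abstract does not detail the automatic processing algorithm, but a minimal flash detector for such a video might flag frames whose mean brightness jumps well above a robust baseline. The sketch below is an assumed, illustrative approach on synthetic data, not the authors' method.

```python
import numpy as np

# Flag candidate lightning frames: per-frame mean brightness compared against a
# robust baseline (median) and scale (median absolute deviation, MAD).

def detect_events(frames, k=8.0):
    """frames: array of shape (n_frames, h, w); returns indices of flash frames."""
    mean_b = frames.reshape(len(frames), -1).mean(axis=1)   # brightness per frame
    baseline = np.median(mean_b)
    spread = np.median(np.abs(mean_b - baseline)) + 1e-9    # robust scale (MAD)
    return np.where(mean_b > baseline + k * spread)[0]

# Synthetic demo: 100 dark frames with one bright "flash" at frame 42.
rng = np.random.default_rng(0)
video = rng.normal(10.0, 1.0, size=(100, 8, 8))
video[42] += 50.0
print(detect_events(video))   # only the flash frame stands out
```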

  20. Fluid dynamics, cavitation, and tip-to-tissue interaction of longitudinal and torsional ultrasound modes during phacoemulsification.

    PubMed

    Zacharias, Jaime; Ohl, Claus-Dieter

    2013-04-01

    To describe the fluidic events that occur in a test chamber during phacoemulsification with longitudinal and torsional ultrasound (US) modalities. Pasteur Ophthalmic Clinic Phacodynamics Laboratory, Santiago, Chile, and Nanyang Technological University, Singapore. Experimental study. Ultra-high-speed videos of a phacoemulsifying tip were recorded while the tip operated in longitudinal and torsional US modalities using variable US power. Two high-speed video cameras were used to record videos at up to 625,000 frames per second. A high-intensity spotlight source was used for illumination to enable shadowgraphy techniques. Particle image velocimetry was used to evaluate fluidic patterns, while a hyperbaric environmental system allowed the evaluation of cavitation effects. Tip-to-tissue interaction at high speed was evaluated using human cataract fragments. Particle image velocimetry showed the following flow patterns for longitudinal and torsional modes at high US powers: forward-directed streaming with longitudinal mode and backward-directed streaming with torsional mode. The ultrasound power threshold for the appearance of cavitation was 60% for longitudinal mode and 80% for torsional mode. Cavitation was suppressed with a pressure of 1.0 bar for longitudinal mode and 0.3 bar for torsional mode. Generation of previously unseen stable gaseous microbubbles was noted. Tip-to-tissue interaction analysis showed the presence of cavitation bubbles close to the site of fragmentation with no apparent effect on cutting. High-speed imaging and particle image velocimetry yielded a better understanding of, and differentiated, the fluidic pattern behavior between longitudinal and torsional US during phacoemulsification. These recordings also showed more detailed aspects of cavitation that clarified its role in lens material cutting for both modalities. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
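
The core idea of particle image velocimetry, as used above, is that the displacement of a particle pattern between two frames appears as the peak of their cross-correlation. The toy sketch below demonstrates this with FFT-based correlation on a synthetic pattern; it is a didactic illustration, not the study's PIV software.

```python
import numpy as np

# FFT-based cross-correlation: the argmax of ifft2(F(b) * conj(F(a))) gives the
# cyclic shift that best maps frame_a onto frame_b.

def piv_displacement(frame_a, frame_b):
    """Return the (dy, dx) shift that best maps frame_a onto frame_b."""
    corr = np.fft.ifft2(np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    # Map large indices back to negative shifts (cyclic correlation).
    return (dy if dy <= h // 2 else dy - h, dx if dx <= w // 2 else dx - w)

rng = np.random.default_rng(1)
a = rng.random((32, 32))                      # random "particle" pattern
b = np.roll(a, shift=(3, -2), axis=(0, 1))    # pattern moved down 3, left 2
print(piv_displacement(a, b))                 # recovers (3, -2)
```

In real PIV the frames are divided into small interrogation windows and this correlation is done per window, yielding a velocity vector field rather than a single displacement.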

  1. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional industrial x-ray sources such as 150 kV and 300 kV constant-potential x-ray generators, 2.5 MeV Van de Graaff generators, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum-aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high-speed imaging method. Video recordings show several demonstrations of this technique, with the played-back x-ray images slowed down by up to 100 times compared with the actual event speed. Typical applications include the boiling-type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement, and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal detail, with one camera in a visual mode and the other in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  2. Probabilistic Methods for Image Generation and Encoding.

    DTIC Science & Technology

    1993-10-15

    video and graphics lab at Georgia Tech, linking together Silicon Graphics workstations, a laser video recorder, a Betacam video recorder, scanner...computer laboratory at Georgia Tech, based on two Silicon Graphics Personal Iris workstations, a SONY laser video recorder, a SONY Betacam SP video...laser disk in component RGB form, with variable speed playback. From the laser recorder the images can be dubbed to the Betacam or the VHS recorder in

  3. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    PubMed

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
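
Two of the motion-based metrics named above, path length and average speed, follow directly from the tracked instrument-tip trajectory. The sketch below uses illustrative positions and a hypothetical sampling rate; it is not EVA's actual data format.

```python
import numpy as np

# Path length: sum of distances between consecutive tracked tip positions.
# Average speed: path length divided by the trajectory's duration.

def path_length(pos):
    """pos: (n, 3) array of tip positions in mm. Total distance travelled."""
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())

def average_speed(pos, fps):
    """Mean tip speed in mm/s for a trajectory sampled at fps frames per second."""
    duration = (len(pos) - 1) / fps
    return path_length(pos) / duration

tip = np.array([[0, 0, 0], [3, 4, 0], [3, 4, 12]], dtype=float)  # mm (illustrative)
print(path_length(tip))           # 5 + 12 = 17 mm
print(average_speed(tip, fps=2))  # 17 mm over 1 s -> 17 mm/s
```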

  4. Slow speed—fast motion: time-lapse recordings in physics education

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-05-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed cameras has extended the capabilities of the tool to much smaller time scales, down to the 1 ms range, using frame rates of typically up to 1000 frames s-1, allowing us to study transient physics phenomena happening too fast for the naked eye. Here we want to extend the range of phenomena that may be studied by video analysis in the opposite direction, by focusing on much longer time scales ranging from minutes and hours to many days or even months. We discuss this time-lapse method and the equipment it requires, and give a few hints on how to produce such recordings for two specific experiments.
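
The speed-up of a time-lapse recording is simply the capture interval multiplied by the playback frame rate; the intervals below are illustrative examples, not the paper's experiments.

```python
# Time-lapse speed-up factor: how many times faster than real time the
# played-back video runs.

def speedup(capture_interval_s, playback_fps=30):
    """One frame every capture_interval_s seconds, played back at playback_fps."""
    return capture_interval_s * playback_fps

print(speedup(60))        # one frame per minute -> 1800x real time
print(speedup(10 * 60))   # one frame every 10 minutes -> 18000x real time
```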

  5. Multi-scale approaches for high-speed imaging and analysis of large neural populations

    PubMed Central

    Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam

    2017-01-01

    Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
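
The spatial decimation step described above, simple local averaging, can be sketched in a few lines. This shows only the block-averaging idea; the paper's full pipeline also involves temporal decimation and demixing.

```python
import numpy as np

# Spatial decimation by local averaging over non-overlapping d x d pixel blocks.

def decimate_spatial(frame, d):
    """Average a (h, w) frame over d x d blocks; h and w must be divisible by d."""
    h, w = frame.shape
    return frame.reshape(h // d, d, w // d, d).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
small = decimate_spatial(frame, 2)
print(small)         # [[2.5, 4.5], [10.5, 12.5]]
print(small.shape)   # (2, 2): 4x fewer pixels to process per frame
```

Applied per frame of a calcium-imaging movie, this reduces the data volume by d² while preserving the coarse spatial structure the demixing step needs.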

  6. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera recording at 240 frames s-1, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students have to obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment only requires simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.
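
At the instant the coin starts to slip, the maximum static friction force equals the required centripetal force, μmg = mω²r, so μ = ω²r/g. The rotation rate and radius below are illustrative values of the kind the Tracker analysis would yield.

```python
import math

# Static friction coefficient from the slipping condition in circular motion:
#   mu * m * g = m * omega^2 * r  =>  mu = omega^2 * r / g

def static_mu(rev_per_s, radius_m, g=9.81):
    omega = 2 * math.pi * rev_per_s   # angular speed in rad/s
    return omega ** 2 * radius_m / g

# Coin at r = 10 cm just starts to slip when the turntable reaches 0.75 rev/s (assumed):
print(f"mu = {static_mu(rev_per_s=0.75, radius_m=0.10):.3f}")
```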

  7. High Speed Video Observations of Natural Lightning and Their Implications to Fractal Description of Lightning

    NASA Astrophysics Data System (ADS)

    Liu, N.; Tilles, J.; Boggs, L.; Bozarth, A.; Rassoul, H.; Riousset, J. A.

    2016-12-01

    Recent high speed video observations of triggered and natural lightning flashes have significantly advanced our understanding of lightning initiation and propagation. For example, they have helped resolve the initiation of lightning leaders [Stolzenburg et al., JGR, 119, 12198, 2014; Montanyà et al, Sci. Rep., 5, 15180, 2015], the stepping of negative leaders [Hill et al., JGR, 116, D16117, 2011], the structure of streamer zone around the leader [Gamerota et al., GRL, 42, 1977, 2015], and transient rebrightening processes occurring during the leader propagation [Stolzenburg et al., JGR, 120, 3408, 2015]. We started an observational campaign in the summer of 2016 to study lightning by using a Phantom high-speed camera on the campus of Florida Institute of Technology, Melbourne, FL. A few interesting natural cloud-to-ground and intracloud lightning discharges have been recorded, including a couple of 8-9 stroke flashes, high peak current flashes, and upward propagating return stroke waves from ground to cloud. The videos show that the propagation of the downward leaders of cloud-to-ground lightning discharges is very complex, particularly for the high-peak current flashes. They tend to develop as multiple branches, and each of them splits repeatedly. For some cases, the propagation characteristics of the leader, such as speed, are subject to sudden changes. In this talk, we present several selected cases to show the complexity of the leader propagation. One of the effective approaches to characterize the structure and propagation of lightning leaders is the fractal description [Mansell et al., JGR, 107, 4075, 2002; Riousset et al., JGR, 112, D15203, 2007; Riousset et al., JGR, 115, A00E10, 2010]. We also present a detailed analysis of the high-speed images of our observations and formulate useful constraints to the fractal description. 
Finally, we compare the obtained results with fractal simulations conducted by using the model reported in [Riousset et al., 2007, 2010].

  8. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  9. Driver behavior analysis for right-turn drivers at signalized intersections using SHRP 2 naturalistic driving study data.

    PubMed

    Wu, Jianqing; Xu, Hao

    2017-12-01

    Understanding driver behavior is important for traffic safety and operation, especially at intersections where different traffic movements conflict. While most driver-behavior studies are based on simulation, this paper documents the analysis of driver behavior at signalized intersections with the SHRP 2 Naturalistic Driving Study (NDS) data. This study analyzes the different influencing factors on the operation (speed control) and observation of right-turn drivers. A total of 300 NDS trips at six signalized intersections were used, including the NDS time-series sensor data, the forward videos and driver face videos. Different factors of drivers, vehicles, roads and environments were studied for their influence on driver behavior. An influencing index function was developed, and the index was calculated for each influencing factor to quantitatively describe its influence level. The influencing index was applied to prioritize the factors, which facilitates the development and selection of safety countermeasures to improve intersection safety. Drivers' speed control was analyzed under different conditions with consideration of the prioritized influencing factors. Vehicle type, traffic signal status, conflicting traffic, conflicting pedestrians and driver age group were identified as the five major influencing factors on driver observation. This research revealed that drivers show high acceleration and low observation frequency under Right-Turn-On-Red (RTOR), which constitutes a potential danger to other roadway users, especially pedestrians. As speed has a direct influence on crash rates and severities, the revealed speed patterns of the different situations also benefit the selection of safety countermeasures at signalized intersections. Published by Elsevier Ltd.

  10. Online evaluation of a commercial video image analysis system (Computer Vision System) to predict beef carcass red meat yield and for augmenting the assignment of USDA yield grades. United States Department of Agriculture.

    PubMed

    Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C

    2002-05-01

    Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS) to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading, in order to improve accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS estimated carcass yield, and CVS measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual component CVS predicted wholesale cut yields more accurately than current online yield grading, and, in an augmentation system, CVS ribeye measurement replaced estimated ribeye area in determination of USDA yield grade, and the accuracy of cutability prediction was improved, under packing plant conditions and speeds, to a level close to that of expert graders applying grades at a comfortable rate of speed offline.

  11. Real time simulation using position sensing

    NASA Technical Reports Server (NTRS)

    Isbell, William B. (Inventor); Taylor, Jason A. (Inventor); Studor, George F. (Inventor); Womack, Robert W. (Inventor); Hilferty, Michael F. (Inventor); Bacon, Bruce R. (Inventor)

    2000-01-01

    An interactive exercise system including exercise equipment having a resistance system, a speed sensor, a controller that varies the resistance setting of the exercise equipment, and a playback device for playing pre-recorded video and audio. The controller, operating in conjunction with speed information from the speed sensor and terrain information from media table files, dynamically varies the resistance setting of the exercise equipment in order to simulate varying degrees of difficulty while the playback device concurrently plays back the video and audio to create the simulation that the user is exercising in a natural setting such as a real-world exercise course.
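
The control loop described above can be sketched as a simple mapping from terrain grade and measured speed to a resistance level. The function name, coefficients, and mapping below are assumptions for illustration, not the patented implementation.

```python
# Hypothetical resistance controller: steeper terrain (from the media table
# file) and higher measured speed both raise the resistance setting.

def resistance_setting(grade_percent, speed_mps, base=20.0, k_grade=8.0, k_speed=1.5):
    """Return a non-negative resistance level for the exercise equipment."""
    level = base + k_grade * grade_percent + k_speed * speed_mps
    return max(0.0, level)

terrain = [0.0, 2.0, 5.0, -1.0]   # grade (%) per playback segment (illustrative)
speed = 4.0                        # m/s, from the speed sensor (illustrative)
for g in terrain:
    print(f"grade {g:+.1f}% -> resistance {resistance_setting(g, speed):.1f}")
```

In the real system this update would run continuously, synchronized with the pre-recorded video so the perceived effort matches the terrain on screen.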

  12. A portable high-speed camera system for vocal fold examinations.

    PubMed

    Hertegård, Stellan; Larsson, Hans

    2014-11-01

    In this article, we present a new portable low-cost system for high-speed examinations of the vocal folds. Analysis of glottal vibratory parameters from the high-speed recordings is compared with videostroboscopic recordings. The high-speed system is built around a Fastec 1 monochrome camera, which is used with newly developed software, High-Speed Studio (HSS). The HSS has options for video/image recording, contains a database, and has a set of analysis options. The Fastec/HSS system has been used clinically since 2011 in more than 2000 patient examinations and recordings. The Fastec 1 camera has sufficient time resolution (≥4000 frames/s) and light sensitivity (ISO 3200) to produce images for detailed analyses of parameters pertinent to vocal fold function. The camera can be used with both rigid and flexible endoscopes. The HSS software includes options for analyses of glottal vibrations, such as the kymogram, phase asymmetry, glottal area variation, open and closed phase, and angle of vocal fold abduction. It can also be used for separate analysis of the left and right vocal fold movements, including maximum speed during opening and closing, a parameter possibly related to vocal fold elasticity. A blinded analysis of 32 patients with various voice disorders examined with both the Fastec/HSS system and videostroboscopy showed that the high-speed recordings were significantly better for the analysis of glottal parameters (eg, mucosal wave and vibration asymmetry). The monochrome high-speed system can be used in daily clinical work within normal clinical time limits for patient examinations. A detailed analysis can be made of voice disorders and laryngeal pathology at a relatively low cost. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. The interchangeability of global positioning system and semiautomated video-based performance data during elite soccer match play.

    PubMed

    Harley, Jamie A; Lovell, Ric J; Barnes, Christopher A; Portas, Matthew D; Weston, Matthew

    2011-08-01

    In elite-level soccer, player motion characteristics are commonly generated from match play and training situations using semiautomated video analysis systems and global positioning system (GPS) technology, respectively. Before such data are used collectively to quantify global player load, it is necessary to understand both the level of agreement and direction of bias between the systems so that specific interventions can be made based on the reported results. The aim of this report was to compare data derived from both systems for physical match performances. Six elite-level soccer players were analyzed during a competitive match using semiautomated video analysis (ProZone® [PZ]) and GPS (MinimaxX) simultaneously. Total distances (TDs), high speed running (HSR), very high speed running (VHSR), sprinting distance (SPR), and high-intensity running distance (HIR; >4.0 m·s-1) were reported in 15-minute match periods. The GPS reported higher values than PZ did for TD (GPS: 1,755.4 ± 245.4 m; PZ: 1,631.3 ± 239.5 m; p < 0.05); PZ reported higher values for SPR and HIR than GPS did (SPR: PZ, 34.1 ± 24.0 m; GPS: 20.3 ± 15.8 m; HIR: PZ, 368.1 ± 129.8 m; GPS: 317.0 ± 92.5 m; p < 0.05). Caution should be exercised when using match-load (PZ) and training-load (GPS) data interchangeably.
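
The systematic bias between the two systems can be summarized per metric as the mean paired difference (a Bland-Altman-style analysis). The paired values below are illustrative, not the study's raw data.

```python
# Mean paired difference between two measurement systems: a positive bias
# means system A reads higher than system B on average.

def mean_bias(system_a, system_b):
    """Mean of (a - b) over paired observations from the two systems."""
    diffs = [a - b for a, b in zip(system_a, system_b)]
    return sum(diffs) / len(diffs)

gps_td = [1750.0, 1820.0, 1700.0]   # GPS total distance per period (m, illustrative)
pz_td = [1630.0, 1695.0, 1580.0]    # ProZone total distance (m, illustrative)
print(f"GPS - PZ bias: {mean_bias(gps_td, pz_td):.1f} m")
```

Knowing both the magnitude and direction of such biases per metric is what lets practitioners apply a correction before pooling match (PZ) and training (GPS) loads.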

  14. Understanding ‘human’ waves: exploiting the physics in a viral video

    NASA Astrophysics Data System (ADS)

    Ferrer-Roca, Chantal

    2018-01-01

    Waves are a relevant part of physics that students find difficult to grasp, even in those cases in which wave propagation kinematics can be visualized. This may hinder a proper understanding of sound, light or quantum physics phenomena that are explained using a wave model. So-called ‘human’ waves, choreographed by people, have proved to be an advisable way to understand basic wave concepts. Videos are widely used as a teaching resource and can be of considerable help in order to watch and discuss ‘human’ waves provided their quality is reasonably good. In this paper we propose and analyse a video that went viral online and has been revealed to be a useful teaching resource for introductory physics students. It shows a unique and very complete series of wave propagations, including pulses with different polarizations and periodic waves that can hardly be found elsewhere. After a proposal on how to discuss the video qualitatively, a quantitative analysis is carried out (no video-tracker needed), including a determination of the main wave magnitudes such as period, wavelength and propagation speed.
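
The wave magnitudes named above are linked by v = λ/T: measuring any two of them from the video fixes the third. The numbers below are illustrative, not the paper's measured values.

```python
# Wave propagation speed from wavelength and period: v = lambda / T.

def propagation_speed(wavelength_m, period_s):
    return wavelength_m / period_s

# E.g., a 'human' wave with a 6 m wavelength repeating every 4 s (assumed values):
print(propagation_speed(wavelength_m=6.0, period_s=4.0))   # 1.5 m/s
```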

  15. ARINC 818 express for high-speed avionics video and power over coax

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Alexander, Jon

    2012-06-01

    CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3 Gbps (up to 6 Gbps) communication, includes a 21 Mbps return path for bidirectional communication, and provides up to 13 W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus," is a protocol standard developed specifically for high-speed, mission-critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2 Gbps; beyond 2 Gbps, it has been implemented exclusively over fiber-optic links. In many rugged applications a copper interface is still desired, and implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces. Results of the successful technology demonstration, dubbed ARINC 818 Express, are presented, showing 3 Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.

  16. Paddling Mode of Forward Flight in Insects

    NASA Astrophysics Data System (ADS)

    Ristroph, Leif; Bergou, Attila J.; Guckenheimer, John; Wang, Z. Jane; Cohen, Itai

    2011-04-01

    By analyzing high-speed video of the fruit fly, we discover a swimming-like mode of forward flight characterized by paddling wing motions. We develop a new aerodynamic analysis procedure to show that these insects generate drag-based thrust by slicing their wings forward at low angle of attack and pushing backwards at a higher angle. Reduced-order models and simulations reveal that the law for flight speed is determined by these wing motions but is insensitive to material properties of the fluid. Thus, paddling is as effective in air as in water and represents a common strategy for propulsion through aquatic and aerial environments.

  17. Improving truck and speed data using paired video and single-loop sensors

    DOT National Transportation Integrated Search

    2006-12-01

    Real-time speed and truck data are important inputs for modern freeway traffic control and : management systems. However, these data are not directly measurable by single-loop detectors. : Although dual-loop detectors provide speeds and classified ve...

  18. Experimental investigation of the combustion products in an aluminised solid propellant

    NASA Astrophysics Data System (ADS)

    Liu, Zhu; Li, Shipeng; Liu, Mengying; Guan, Dian; Sui, Xin; Wang, Ningfei

    2017-04-01

    Aluminium is widely used as an additive to improve the ballistic and energy performance of solid propellants, but unburned aluminium does not contribute to the specific impulse and incurs both thermal and momentum two-phase flow losses. Understanding aluminium combustion behaviour during solid propellant burning is therefore important for improving internal ballistic performance. Recent developments and experimental results on such combustion behaviour are presented in this paper. A variety of experimental techniques, ranging from quenching and dynamic measurement to high-speed CCD video recording, were used to study aluminium combustion behaviour and the size distribution of the initial agglomerates. This experimental investigation also provides the size distribution of the condensed-phase products. Results suggest that the addition of an organic fluoride compound to a solid propellant will generate smaller-diameter condensed-phase products due to sublimation of AlF3. Lastly, a physico-chemical picture of the agglomeration process was developed based on the results of high-speed CCD video analysis.

  19. A Case Study on the Walking Speed of Pedestrian at the Bus Terminal Area

    NASA Astrophysics Data System (ADS)

    Firdaus Mohamad Ali, Mohd; Salleh Abustan, Muhamad; Hidayah Abu Talib, Siti; Abustan, Ismail; Rahman, Noorhazlinda Abd; Gotoh, Hitoshi

    2018-03-01

    Walking speed is one of the factors in understanding pedestrian walking behaviour. Each pedestrian walks at a different speed, regulated by factors such as gender and age. This study was conducted at a bus terminal area with two objectives: first, to determine the average walking speed of pedestrians, considering the factors of age, gender, and whether they were carrying baggage; and second, to compare the average walking speed, with age as the factor of comparison, between pedestrians at the bus terminal area and at a crosswalk. The demographic factors considered were gender (male and female) and age, divided into seven categories: children, adult men, adult women, senior adult men, senior adult women, people over 70, and disabled persons. Experimental data were obtained by video-recording, with a camcorder, the people walking and roaming around the main lobby for 45 minutes. The data extracted from the video were then analysed using software named Human Behaviour Simulator (HBS). The results show that male pedestrians walked faster than female pedestrians, with average walking speeds of 1.13 m/s and 1.07 m/s respectively. On average, pedestrians walking without baggage were faster than those carrying baggage, at 1.02 m/s versus 0.70 m/s. Male pedestrians walk faster than female pedestrians because they have higher stamina and are mostly taller; pedestrians carrying baggage walk more slowly because the extra weight slows them down.
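    The average walking speed in such a study comes from timestamped positions extracted from the video. A minimal sketch is shown below; the track coordinates are hypothetical, and the HBS software's internals are not public, so this only illustrates the arithmetic.

```python
import math

def average_walking_speed(track):
    """Mean speed (m/s) of a pedestrian from timestamped (t, x, y) samples."""
    dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    elapsed = track[-1][0] - track[0][0]
    return dist / elapsed

# Hypothetical track extracted from the video: (seconds, metres, metres)
track = [(0.0, 0.0, 0.0), (1.0, 1.1, 0.0), (2.0, 2.2, 0.3)]
print(round(average_walking_speed(track), 2))  # → 1.12
```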

  20. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, in which an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes from at least two cameras in proximity overlap. An object can then be tracked continuously over long distances or across multiple cameras, with applications, for example, in wireless sensor networks for surveillance or navigation.

  1. Action and puzzle video games prime different speed/accuracy tradeoffs.

    PubMed

    Nelson, Rolf A; Strachan, Ian

    2009-01-01

    To understand the way in which video-game play affects subsequent perception and cognitive strategy, two experiments were performed in which participants played either a fast-action game or a puzzle-solving game. Before and after video-game play, participants performed a task in which both speed and accuracy were emphasized. In experiment 1 participants engaged in a location task in which they clicked a mouse on the spot where a target had appeared, and in experiment 2 they were asked to judge which of four shapes was most similar to a target shape. In both experiments, participants were much faster but less accurate after playing the action game, while they were slower but more accurate after playing the puzzle game. Results are discussed in terms of a taxonomy of video games by their cognitive and perceptual demands.

  2. Video Encryption and Decryption on Quantum Computers

    NASA Astrophysics Data System (ADS)

    Yan, Fei; Iliyasu, Abdullah M.; Venegas-Andraca, Salvador E.; Yang, Huamin

    2015-08-01

    A method for video encryption and decryption on quantum computers is proposed based on color information transformations on each frame encoding the content of the video. The proposed method provides a flexible operation to encrypt quantum video by means of quantum measurement in order to enhance the security of the video. To validate the proposed approach, a Tetris tile-matching puzzle game video is utilized in the experimental simulations. The results obtained suggest that the proposed method enhances the security and speed of quantum video encryption and decryption, both properties required for secure transmission and sharing of video content in quantum communication.

  3. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  4. TxDOT Video Analytics System User Manual

    DOT National Transportation Integrated Search

    2012-08-01

    The TxDOT video analytics demonstration system is designed to monitor traffic conditions by collecting data such as speed and counts, detecting incidents such as stopped vehicles and reporting such incidents to system administrators. : As illustrated...

  5. How much time do drivers need to obtain situation awareness? A laboratory-based study of automated driving.

    PubMed

    Lu, Zhenji; Coster, Xander; de Winter, Joost

    2017-04-01

    Drivers of automated cars may occasionally need to take back manual control after a period of inattentiveness. At present, it is unknown how long it takes to build up situation awareness of a traffic situation. In this study, 34 participants were presented with animated video clips of traffic situations on a three-lane road, from an egocentric viewpoint on a monitor equipped with an eye tracker. Each participant viewed 24 videos of different durations (1, 3, 7, 9, 12, or 20 s). After each video, participants reproduced the end of the video by placing cars in a top-down view, and indicated the relative speeds of the placed cars with respect to the ego-vehicle. Results showed that the longer the video length, the lower the absolute error of the number of placed cars, the lower the total distance error between the placed cars and actual cars, and the lower the geometric difference between the placed cars and the actual cars. These effects appeared to be saturated at video lengths of 7-12 s. The total speed error between placed and actual cars also reduced with video length, but showed no saturation up to 20 s. Glance frequencies to the mirrors decreased with observation time, which is consistent with the notion that participants first estimated the spatial pattern of cars after which they directed their attention to individual cars. In conclusion, observers are able to reproduce the layout of a situation quickly, but the assessment of relative speeds takes 20 s or more. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random-sampling camera, because no commercially available image sensors with random exposure or sampling capabilities currently exist. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix the amount of exposure by rows for each 8x8 pixel block. This CMOS sensor is not fully controllable pixel by pixel and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
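    The line-dependent exposure control described above can be mimicked schematically: within an 8x8 block, the exposure start is chosen per column and the exposure length per row. This is only an illustrative model of the sampling pattern, not the sensor's actual register interface.

```python
import random

def block_exposure_mask(start_by_col, length_by_row):
    """Build an 8x8 temporal-exposure mask for one pixel block.

    Mirrors the line-dependent control described for the prototype sensor:
    exposure *start* is set per column, exposure *length* per row.
    Returns mask[row][col] = (start_frame, end_frame).
    """
    return [[(start_by_col[c], start_by_col[c] + length_by_row[r])
             for c in range(8)] for r in range(8)]

random.seed(0)
starts = [random.randrange(8) for _ in range(8)]       # per-column reset times
lengths = [1 + random.randrange(4) for _ in range(8)]  # per-row exposure lengths
mask = block_exposure_mask(starts, lengths)
print(mask[0][0])
```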

  7. Instantaneous Assessment Of Athletic Performance Using High Speed Video

    NASA Astrophysics Data System (ADS)

    Hubbard, Mont; Alaways, LeRoy W.

    1988-02-01

    We describe the use of high-speed video to provide quantitative assessment of motion in athletic performance. Besides the normal requirement for accuracy, an essential feature is that the information be provided rapidly enough that it may serve as valuable feedback in the learning process. The general considerations which must be addressed in the development of such a computer-based system are discussed. These ideas are illustrated specifically through the description of a prototype system which has been designed for the javelin throw.

  8. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low- pass function, and difference of Gaussian filtering, which could be used for the QMF high- pass function. Although not designed specifically for wavelet analysis, the biologically- inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
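    The QMF split that both processors accelerate can be illustrated with the simplest quadrature mirror pair, the Haar averages and differences. This one-dimensional sketch is for intuition only and is not the NIFS or C80 implementation.

```python
def qmf_analysis(signal):
    """One-level Haar QMF split: low-pass (averages) and high-pass (differences)."""
    low = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    high = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return low, high

def qmf_synthesis(low, high):
    """Perfect reconstruction of the original signal from the two subbands."""
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

sig = [4.0, 6.0, 10.0, 2.0, 8.0, 8.0]
low, high = qmf_analysis(sig)
print(low, high)  # the two subbands, each half the original length
```

    In image QMF the same split is applied along rows and then columns, giving the four subbands used for wavelet analysis.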

  9. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  10. Accuracy of visual estimates of joint angle and angular velocity using criterion movements.

    PubMed

    Morrison, Craig S; Knudson, Duane; Clayburn, Colby; Haywood, Philip

    2005-06-01

    A descriptive study was performed to document undergraduate physical education majors' (22.8 ± 2.4 yr old) visual estimates of sagittal-plane elbow angle and angular velocity of elbow flexion. 42 subjects rated videotape replays of 30 movements organized into three speeds of movement and two criterion elbow angles. Video images of the movements were analyzed with Peak Motus to measure actual values of elbow angles and peak angular velocity. Of the subjects, 85.7% had speed ratings significantly correlated with true peak elbow angular velocity in all three angular velocity conditions. Few (16.7%) subjects' ratings of elbow angle correlated significantly with actual angles. Analysis of the subjects with good ratings showed that the accuracy of visual ratings was significantly related to speed, with decreasing accuracy for slower speeds of movement. The use of criterion movements did not improve the small percentage of novice observers who could accurately estimate body angles during movement.

  11. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

    A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed. Thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders the system suitable for portable, on-site analysis.
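    The speed-referenced averaging step can be sketched in one dimension: each line scan is shifted by the known scan motion before accumulation, so a real defect reinforces at one surface coordinate while uncorrelated thermal noise averages out. The temperature profile and noise model below are invented for illustration.

```python
import random

def shift_and_average(frames, ppf, surface_len):
    """Average line scans after compensating the constant scan speed.

    Pixel i of frame k views surface position i + k*ppf (ppf = pixels the
    view advances per frame), so accumulating at that position reinforces
    stationary surface features while random noise averages out.
    """
    acc = [0.0] * surface_len
    cnt = [0] * surface_len
    for k, frame in enumerate(frames):
        for i, v in enumerate(frame):
            j = i + k * ppf
            if j < surface_len:
                acc[j] += v
                cnt[j] += 1
    return [a / n if n else 0.0 for a, n in zip(acc, cnt)]

# Hypothetical surface temperature profile with one defect (the bump at index 5)
surface = [20.0] * 10
surface[5] = 24.0
random.seed(1)
frames = [[surface[i + k] + random.uniform(-0.5, 0.5) for i in range(6)]
          for k in range(5)]   # the 6-pixel view advances one pixel per frame
avg = shift_and_average(frames, 1, 10)
print(avg[5] - avg[4])  # the defect stands out above the averaged noise
```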

  12. Exploding Balloons, Deformed Balls, Strange Reflections and Breaking Rods: Slow Motion Analysis of Selected Hands-On Experiments

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2011-01-01

    A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…

  13. Video game experience and its influence on visual attention parameters: an investigation using the framework of the Theory of Visual Attention (TVA).

    PubMed

    Schubert, Torsten; Finke, Kathrin; Redel, Petra; Kluckow, Steffen; Müller, Hermann; Strobach, Tilo

    2015-05-01

    Experts with video game experience, in contrast to non-experienced persons, are superior in multiple domains of visual attention. However, it is an open question which basic aspects of attention underlie this superiority. We approached this question using the framework of Theory of Visual Attention (TVA) with tools that allowed us to assess various parameters that are related to different visual attention aspects (e.g., perception threshold, processing speed, visual short-term memory storage capacity, top-down control, spatial distribution of attention) and that are measurable on the same experimental basis. In Experiment 1, we found advantages of video game experts in perception threshold and visual processing speed; the latter being restricted to the lower positions of the used computer display. The observed advantages were not significantly moderated by general person-related characteristics such as personality traits, sensation seeking, intelligence, social anxiety, or health status. Experiment 2 tested a potential causal link between the expert advantages and video game practice with an intervention protocol. It found no effects of action video gaming on perception threshold, visual short-term memory storage capacity, iconic memory storage, top-down control, and spatial distribution of attention after 15 days of training. However, the observation of a selective improvement of processing speed at the lower positions of the computer screen after video game training, together with retest effects, suggests limited possibilities for improving basic aspects of visual attention (TVA) with practice. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Limb locomotion--speed distribution analysis as a new method for stance phase detection.

    PubMed

    Peham, C; Scheidl, M; Licka, T

    1999-10-01

    The stance phase is used for the determination of many parameters in motion analysis. In this technical note the authors present a new kinematical method for determination of stance phase. From the high-speed video data, the speed distribution of the horizontal motion of the distal limb is calculated. The speed with the maximum occurrence within the motion cycle defines the stance phase, and this speed is used as threshold for beginning and end of the stance phase. In seven horses the results obtained with the presented method were compared to synchronous stance phase determination using a force plate integrated in a hard track. The mean difference between the results was 10.8 ms, equalling 1.44% of mean stance phase duration. As a test, the presented method was applied to a horse trotting on the treadmill, and to a human walking on concrete. This article describes an easy and safe method for stance phase determination in continuous kinematic data and proves the reliability of the method by comparing it to kinetic stance phase detection. This method may be applied in several species and all gaits, on the treadmill and on firm ground.
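    The speed-distribution idea in this note is simple to prototype: bin the horizontal hoof speeds over the motion cycle, take the modal bin as the near-stationary stance speed, and use it as the detection threshold. The sketch below follows that recipe with invented speed samples; the bin width is an assumed parameter, not taken from the paper.

```python
from collections import Counter

def stance_speed_threshold(speeds, bin_width=0.05):
    """Threshold speed for stance detection: the hoof is near-stationary for
    the longest stretch of the cycle, so the most frequent speed bin marks
    the stance phase; return that bin's upper edge."""
    bins = Counter(round(v / bin_width) for v in speeds)
    mode_bin, _ = bins.most_common(1)[0]
    return (mode_bin + 0.5) * bin_width

def stance_frames(speeds, threshold):
    """Indices of frames whose horizontal speed is at or below the threshold."""
    return [i for i, v in enumerate(speeds) if v <= threshold]

# Hypothetical horizontal hoof speeds (m/s) for one stride, sampled per frame
speeds = [0.02, 0.01, 0.02, 0.02, 0.01, 0.02, 0.8, 2.1, 3.0, 2.4, 1.1, 0.3]
thr = stance_speed_threshold(speeds)
print(stance_frames(speeds, thr))  # → [0, 1, 2, 3, 4, 5]
```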

  15. Automated High-Speed Video Detection of Small-Scale Explosives Testing

    NASA Astrophysics Data System (ADS)

    Ford, Robert; Guymon, Clint

    2013-06-01

    Small-scale explosives sensitivity test data is used to evaluate hazards of processing, handling, transportation, and storage of energetic materials. Accurate test data is critical to implementation of engineering and administrative controls for personnel safety and asset protection. Operator mischaracterization of reactions during testing contributes to either excessive or inadequate safety protocols. Use of equipment and associated algorithms to aid the operator in reaction determination can significantly reduce operator error. Safety Management Services, Inc. has developed an algorithm to evaluate high-speed video images of sparks from an ESD (Electrostatic Discharge) machine to automatically determine whether or not a reaction has taken place. The algorithm with the high-speed camera is termed GoDetect (patent pending). An operator assisted version for friction and impact testing has also been developed where software is used to quickly process and store video of sensitivity testing. We have used this method for sensitivity testing with multiple pieces of equipment. We present the fundamentals of GoDetect and compare it to other methods used for reaction detection.
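    The idea of letting an algorithm call go/no-go can be illustrated with a crude brightness-jump detector on mean frame intensity. GoDetect's actual criteria are not public, so this is only a schematic stand-in with invented frame data.

```python
def detect_reaction(frames, brightness_jump=40.0):
    """Flag a reaction ('go') if mean frame brightness jumps sharply between
    consecutive high-speed frames, a crude stand-in for flash detection."""
    means = [sum(f) / len(f) for f in frames]
    return any(b - a > brightness_jump for a, b in zip(means, means[1:]))

# Hypothetical 4-pixel frames: baseline, then a bright flash at frame 2
go = [[10, 12, 11, 9], [11, 10, 12, 10], [200, 180, 220, 190], [50, 60, 55, 40]]
no_go = [[10, 12, 11, 9], [11, 10, 12, 10], [12, 11, 10, 13], [11, 12, 10, 9]]
print(detect_reaction(go), detect_reaction(no_go))  # → True False
```

    A production detector would of course work on spatial regions of interest and temporal context rather than whole-frame means.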

  16. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system protects video data from theft and malicious tampering, solving its security problems. At the same time, to meet information-security requirements for real-time, transparent encryption of high-speed audio and video data streams, this paper analyses the principles of the AES algorithm in depth and, on the TMS320DM6446 hardware platform with the DaVinci software framework, proposes specific methods for realizing the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted with AES128 cannot be played normally, which ensures the security of the digital movies. Comparing the performance of the AES128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.

  17. Photogrammetric Trajectory Estimation of Foam Debris Ejected From an F-15 Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.

    2006-01-01

    Photogrammetric analysis of high-speed digital video data was performed to estimate trajectories of foam debris ejected from an F-15B aircraft. This work was part of a flight test effort to study the transport properties of insulating foam shed by the Space Shuttle external tank during ascent. The conical frustum-shaped pieces of debris, called "divots," were ejected from a flight test fixture mounted underneath the F-15B aircraft. Two onboard cameras gathered digital video data at two thousand frames per second. Time histories of divot positions were determined from the videos post flight using standard photogrammetry techniques. Divot velocities were estimated by differentiating these positions with respect to time. Time histories of divot rotations were estimated using four points on the divot face. Estimated divot position, rotation, and Mach number for selected cases are presented. Uncertainty in the results is discussed.
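    Differentiating photogrammetric positions with respect to time is typically done with finite differences. Below is a minimal central-difference sketch at the paper's two-thousand-frames-per-second rate; the position samples are invented for illustration, not flight data.

```python
def central_difference_velocity(t, x):
    """Estimate velocity at interior samples by central differences:
    v[i] = (x[i+1] - x[i-1]) / (t[i+1] - t[i-1])."""
    return [(x[i + 1] - x[i - 1]) / (t[i + 1] - t[i - 1])
            for i in range(1, len(x) - 1)]

# Hypothetical divot positions (m) sampled at 2000 fps (dt = 0.5 ms)
dt = 1.0 / 2000.0
t = [i * dt for i in range(5)]
x = [0.0, 0.005, 0.011, 0.018, 0.026]  # assumed photogrammetric positions
v = central_difference_velocity(t, x)
print(v)  # velocities (m/s) at the three interior samples
```

    Central differences halve the noise amplification of one-sided differences, which matters when the positions carry photogrammetric measurement uncertainty.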

  18. Harnessing Students' Interest in Physics with Their Own Video Games

    NASA Astrophysics Data System (ADS)

    Like, Christopher

    2011-04-01

    Many physics teachers assign projects where students are asked to measure real-world motion. One purpose of this student-centered activity is to cultivate the relevance of physics in their lives. Typical project topics may include measuring the speed of a student's fastball and calculating how much reaction time batters are given. Another student may find the trajectory of her dive off the blocks at the pool and its effect on race time. Leaving the experimental design to the student's imagination allows for a variety of proposals ranging from stopwatches to highly technical video analysis. The past few years have shown an increase in students' eagerness to tackle the physics behind the motion of virtual characters and phenomena in their own video games. This paper puts forth a method for bringing the games students already play for enjoyment into the physics classroom and analyzing the physics behind them.

  19. How to study the Doppler effect with Audacity software

    NASA Astrophysics Data System (ADS)

    Adriano Dias, Marco; Simeão Carvalho, Paulo; Rodrigues Ventura, Daniel

    2016-05-01

    The Doppler effect is one of the recurring themes in college and high school classes. In order to contextualize the topic and engage students in their own learning process, we propose a simple and easily accessible activity: having students analyse videos available on the internet. The sound of a vehicle's engine as it passes the camera is recorded on the video; it is then analyzed with the free software Audacity by measuring the frequency of the sound as the vehicle approaches and recedes from the observer. The speed of the vehicle is then determined by applying the Doppler-effect equations for acoustic waves.
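    The final step can be written out explicitly. For a source moving at speed v past a stationary observer, the heard frequencies are fa = f0·c/(c−v) on approach and fr = f0·c/(c+v) on recession, which solve to v = c·(fa−fr)/(fa+fr). The frequency readings below are assumed values of the kind one would measure in Audacity, not taken from the paper.

```python
def vehicle_speed(f_approach, f_recede, c=343.0):
    """Source speed from the shifted frequencies heard on approach and
    recession: v = c*(fa - fr)/(fa + fr), derived from fa = f0*c/(c - v)
    and fr = f0*c/(c + v). c is the speed of sound in air (m/s)."""
    return c * (f_approach - f_recede) / (f_approach + f_recede)

# Hypothetical frequencies read off Audacity's spectrum view (Hz)
fa, fr = 480.0, 420.0
print(round(vehicle_speed(fa, fr), 1))  # → 22.9 m/s
```

    Note that the emitted frequency f0 cancels out, so only the two measured frequencies and the speed of sound are needed.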

  20. Exercise Performance and Corticospinal Excitability during Action Observation

    PubMed Central

    Wrightson, James G.; Twomey, Rosie; Smeeton, Nicholas J.

    2016-01-01

    Purpose: Observation of a model performing fast exercise improves simultaneous exercise performance; however, the precise mechanism underpinning this effect is unknown. The aim of the present study was to investigate whether the speed of the observed exercise influenced both upper body exercise performance and the activation of a cortical action observation network (AON). Method: In Experiment 1, 10 participants completed a 5 km time trial on an arm-crank ergometer whilst observing a blank screen (no-video) and a model performing exercise at both a typical (i.e., individual mean cadence during baseline time trial) and 15% faster than typical speed. In Experiment 2, 11 participants performed arm crank exercise whilst observing exercise at typical speed, 15% slower and 15% faster than typical speed. In Experiment 3, 11 participants observed the typical, slow and fast exercise, and a no-video, whilst corticospinal excitability was assessed using transcranial magnetic stimulation. Results: In Experiment 1, performance time decreased and mean power increased, during observation of the fast exercise compared to the no-video condition. In Experiment 2, cadence and power increased during observation of the fast exercise compared to the typical speed exercise but there was no effect of observation of slow exercise on exercise behavior. In Experiment 3, observation of exercise increased corticospinal excitability; however, there was no difference between the exercise speeds. Conclusion: Observation of fast exercise improves simultaneous upper-body exercise performance. However, because there was no effect of exercise speed on corticospinal excitability, these results suggest that these improvements are not solely due to changes in the activity of the AON. PMID:27014037

  1. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    NASA Astrophysics Data System (ADS)

    Lee, Victor R.

    2015-04-01

Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in the practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on, and attention to, slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.

  2. Synchronization of video recording and laser pulses including background light suppression

    NASA Technical Reports Server (NTRS)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning the laser pulse in the proper video field allows, after recording, for viewing of the laser light image on a video monitor using the pause mode of a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides background light suppression by increasing the shutter speed during the frame in which the laser light image is captured. The result is one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames, the shutter speed is slower, allowing normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.

  3. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

4DVideo is creating a general-purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application, or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file-size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
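The photogrammetric tracking described here relies on relating matched landmarks across calibrated views. As a minimal illustration of the underlying geometry (a generic textbook relation, not 4DCapture's own algorithm), the depth of a matched landmark in a rectified two-camera setup follows from its disparity; the focal length, baseline, and disparity below are hypothetical values:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# All numbers are illustrative assumptions, not 4DCapture parameters.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a landmark matched in two parallel, calibrated views."""
    return focal_px * baseline_m / disparity_px

# e.g. 800 px focal length, 12 cm baseline, 16 px disparity
Z = depth_from_disparity(focal_px=800.0, baseline_m=0.12, disparity_px=16.0)
print(Z)  # 6.0 metres
```

Full multi-view tracking generalizes this with per-camera calibration matrices and triangulation, but the disparity relation captures why matched landmarks from two or more views suffice for 3-D trajectories.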

  4. Work zone speed reduction utilizing dynamic speed signs

    DOT National Transportation Integrated Search

    2011-08-30

    Vast quantities of transportation data are automatically recorded by intelligent transportations infrastructure, such as inductive loop detectors, video cameras, and side-fire radar devices. Such devices are typically deployed by traffic management c...

  5. High Speed Video Applications In The Pharmaceutical Industry

    NASA Astrophysics Data System (ADS)

    Stapley, David

    1985-02-01

The pursuit of quality is essential in the development and production of drugs. The pursuit of excellence is relentless, a never-ending search. In the pharmaceutical industry, we all know and apply wide-ranging techniques to assure quality production, and we all know that in reality none of these techniques is perfect for all situations. We have all experienced the damaged foil, blister or tube, the missing leaflet, the 'hard to read' batch code. We are all aware of the need to supplement the traditional techniques of fault finding. This paper shows how high-speed video systems can be applied to fully automated filling and packaging operations as a tool to aid the company's drive for high quality and productivity. The range of products involved totals some 350 in approximately 3,000 pack variants, encompassing creams, ointments, lotions, capsules, tablets, parenteral and sterile antibiotics. Pharmaceutical production demands diligence at all stages, with optimum use of the techniques offered by the latest technology. Figure 1 shows typical stages of pharmaceutical production in which quality must be assured, and highlights those stages where the use of high-speed video systems has proved of value to date. The use of high-speed video systems begins with the very first use of machine and materials: commissioning and validation (the term used for determining that a process is capable of consistently producing the requisite quality), and continues to support in-process monitoring throughout the life of the plant. The activity of validation in the packaging environment is particularly in need of a tool that can reveal the nature of high-speed faults, no matter how infrequently they occur, so that informed changes can be made precisely and rapidly. The prime use of this tool is to ensure that machines are less sensitive to minor variations in component characteristics.

  6. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  7. Network-linked long-time recording high-speed video camera system

    NASA Astrophysics Data System (ADS)

    Kimura, Seiji; Tsuji, Masataka

    2001-04-01

This paper describes a network-oriented, long-recording-time high-speed digital video camera system that uses an HDD (hard disk drive) as its recording medium. Semiconductor memories (DRAM, etc.) are the most common image-data recording media in existing high-speed digital video cameras. They are widely used because of their fast writing and reading of picture data; the drawback is that their recording time is limited to only several seconds because the data volume is very large. A recording time of several seconds is sufficient for many applications, but a much longer recording time is required in applications where the trigger timing is hard to predict. In recent years, the recording density of HDDs has improved dramatically, drawing more attention to their value as a long-recording-time medium. We conceived the idea that a compact system capable of long-time recording could be built if an HDD could serve as the memory unit for high-speed digital image recording. However, the data rate of such a system, recording 640 X 480 pixel pictures at 500 frames per second (fps) with 8-bit grayscale, is 153.6 Mbyte/s, far beyond the writing speed of a commonly used HDD. We therefore developed a dedicated image compression system and verified its capability to lower the data rate from the digital camera to match the HDD writing rate.
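The quoted data rate follows directly from the capture parameters, and the gap to disk speed sets the compression target. A quick back-of-envelope check (the ~30 Mbyte/s sustained HDD write speed is an illustrative assumption of the era's hardware, not a figure from the paper):

```python
# Reproduce the data-rate figure quoted in the abstract:
# 640 x 480 pixels, 8-bit (1 byte) grayscale, 500 frames per second.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 1   # 8-bit grayscale
FPS = 500

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL     # 307,200 bytes
rate_mb_per_s = bytes_per_frame * FPS / 1e6            # decimal Mbyte/s
print(rate_mb_per_s)  # 153.6, matching the abstract

# A hypothetical sustained HDD write rate (assumption, not from the paper)
# implies the minimum compression ratio the dedicated coder must deliver:
HDD_WRITE_MB_PER_S = 30.0
min_compression_ratio = rate_mb_per_s / HDD_WRITE_MB_PER_S
print(round(min_compression_ratio, 2))  # 5.12
```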

Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

High-speed imaging systems are used across a large field of science and engineering. Although high-speed camera systems have improved greatly, most applications use them only to obtain high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, it was adjusted to within 0.2 pixels by this method.
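Displacement adjustment between sensor images is commonly done by image registration. The sketch below shows generic integer-pixel registration via FFT cross-correlation; it is a stand-in for, not a reproduction of, the authors' method (which reaches 0.2-pixel accuracy and is not detailed in the abstract):

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel displacement of img relative to ref via FFT cross-correlation."""
    spec = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(spec).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map correlation-peak indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# synthetic check: a random test image circularly shifted by (3, -2) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(estimate_shift(ref, img))  # (3, -2)
```

Sub-pixel accuracy is typically reached by interpolating around the correlation peak or by phase-correlation fitting; the integer step above is the common starting point.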

  9. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and Hough transform on high-speed video sequence as tools for dynamic measurements on that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
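Recovering acceleration from a tracked trajectory, as done for the falling ballast, reduces to numerical differentiation of the per-frame positions. A minimal sketch on a synthetic free-fall trajectory (the 1000 fps frame rate is an illustrative assumption, not the study's camera setting):

```python
# Second-difference acceleration estimate from per-frame positions,
# demonstrated on an ideal free-fall trajectory y = 0.5 * g * t^2.
FPS = 1000.0          # hypothetical high-speed camera frame rate
DT = 1.0 / FPS
G = 9.81              # m/s^2

# synthetic tracked positions (metres), one sample per frame
ys = [0.5 * G * (i * DT) ** 2 for i in range(100)]

# central second difference: a_i ~ (y[i+1] - 2*y[i] + y[i-1]) / dt^2
accels = [(ys[i + 1] - 2 * ys[i] + ys[i - 1]) / DT**2
          for i in range(1, len(ys) - 1)]
mean_a = sum(accels) / len(accels)
print(round(mean_a, 2))  # 9.81
```

On real video the positions are noisy, so in practice the trajectory is smoothed (or fitted to a model) before differentiating, since the second difference amplifies pixel-level noise by 1/dt^2.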

  10. Simultaneous Recordings of Human Microsaccades and Drifts with a Contemporary Video Eye Tracker and the Search Coil Technique

    PubMed Central

    McCamy, Michael B.; Otero-Millan, Jorge; Leigh, R. John; King, Susan A.; Schneider, Rosalyn M.; Macknik, Stephen L.; Martinez-Conde, Susana

    2015-01-01

    Human eyes move continuously, even during visual fixation. These “fixational eye movements” (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs. PMID:26035820
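Microsaccade detection in both coil and video data typically relies on velocity-threshold algorithms. The sketch below is a simplified fixed-threshold stand-in (widely used pipelines such as Engbert-Kliegl use adaptive, noise-scaled 2-D thresholds); the sampling rate, drift speed, excursion size, and threshold are all illustrative assumptions:

```python
import numpy as np

def detect_saccades(x, t, vel_thresh=10.0):
    """Flag samples whose eye velocity (deg/s) exceeds a fixed threshold.
    Simplified stand-in for adaptive velocity-threshold detection."""
    v = np.gradient(x, t)          # velocity in deg/s
    return np.abs(v) > vel_thresh

# synthetic 1-D gaze trace: slow drift plus one small, fast excursion
t = np.arange(0, 1.0, 0.002)           # 500 Hz sampling
x = 0.05 * t                           # drift at 0.05 deg/s
x[250:255] += np.linspace(0, 0.2, 5)   # 0.2 deg in ~10 ms -> ~25 deg/s peak
flags = detect_saccades(x, t)
print(flags.any(), flags[:100].any())  # True False
```

The drift segment stays far below threshold while the excursion is flagged, which is the basic separation of drift from microsaccades that both recording systems must support.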

  11. Use of video to facilitate sideline concussion diagnosis and management decision-making.

    PubMed

    Davis, Gavin; Makdissi, Michael

    2016-11-01

Video analysis can provide critical information to improve diagnostic accuracy and the speed of clinical decision-making in potential cases of concussion. The objective of this study was to validate a hierarchical flowchart for the assessment of video signs of concussion, and to determine whether its implementation could improve the process of game-day video assessment. Prospective cohort study. All impacts and collisions potentially resulting in a concussion were identified during the 2012 and 2013 Australian Football League (AFL) seasons. Consensus definitions were developed for clinical signs associated with concussion. A hierarchical flowchart was developed based on the reliability and validity of the video signs of concussion. Ninety videos were assessed, with 45 incidents of clinically confirmed concussion and 45 cases where no concussion was sustained. Each video was examined using the hierarchical flowchart, and a single response was given for each video based on the highest-ranking element in the flowchart. No protective action, impact seizure, motor incoordination or blank/vacant look were the highest-ranked video signs in almost half of the clinically confirmed concussions, but in only 8.8% of non-concussed individuals. The presence of facial injury, clutching at the head and being slow to get up were the highest-ranked signs in 77.7% of non-concussed individuals. This study suggests that the implementation of a flowchart model could improve timely assessment of concussion, and it identifies the video signs that should trigger automatic removal from play. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  12. Wing and body kinematics of forward flight in drone-flies.

    PubMed

    Meng, Xue Guang; Sun, Mao

    2016-08-15

    Here, we present a detailed analysis of the wing and body kinematics in drone-flies in free flight over a range of speeds from hovering to about 8.5 m s(-1). The kinematics was measured by high-speed video techniques. As the speed increased, the body angle decreased and the stroke plane angle increased; the wingbeat frequency changed little; the stroke amplitude first decreased and then increased; the ratio of the downstroke duration to the upstroke duration increased; the mean positional angle increased at lower speeds but changed little at speeds above 3 m s(-1). At a speed above about 1.5 m s(-1), wing rotation at supination was delayed and that at pronation was advanced, and consequently the wing rotations were mostly performed in the upstroke. In the downstroke, the relative velocity of the wing increased and the effective angle of attack decreased with speed; in the upstroke, they both decreased with speed at lower speeds, and at higher speeds, the relative velocity became larger but the effective angle of attack became very small. As speed increased, the increasing inclination of the stroke plane ensured that the effective angle of attack in the upstroke would not become negative, and that the wing was in suitable orientations for vertical-force and thrust production.

  13. Monitoring system for phreatic eruptions and thermal behavior on Poás volcano hyperacidic lake, with permanent IR and HD cameras

    NASA Astrophysics Data System (ADS)

    Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.

    2015-12-01

Volcano monitoring has expanded considerably over the past decades, and one of the emerging techniques involving new technology is digital video surveillance together with the automated software that comes with it. Given the budget and some on-site facilities, it is now possible to set up a real-time network of high-definition video cameras, some with special features such as infrared, thermal, or ultraviolet imaging. These can simplify (or complicate) the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows, and the closing/opening of vents, to mention only some of the many applications of these cameras. We present the methodology of installing at Poás volcano a real-time system for processing and storing HD and thermal images and video, the process of installing and operating the HD and IR cameras, towers, solar panels, and radios needed to transmit data from a volcano located in the tropics, and which volcanic areas we targeted and why. We also describe the hardware and software we consider necessary to carry out the project. Finally, we show early data examples of upwelling areas in the Poás volcano hyperacidic lake and their relation to lake phreatic eruptions, data on increasing temperature of an old dome wall preceding sudden wall explosions, and the use of IR video for measuring plume speed and contour in combination with DOAS or FTIR measurements.

  14. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
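The idea behind modulo-PCM can be sketched in a few lines: each sample is transmitted modulo M (fewer bits than full PCM), and the decoder resolves the resulting ambiguity with a prediction. The previous-pixel predictor and the 8-to-5-bit choice below are illustrative assumptions; the paper's actual predictor and FPGA design are not given in the abstract:

```python
# Toy modulo-PCM (MPCM) coder: transmit each pixel mod M, reconstruct
# using the previous decoded pixel as the prediction (an assumption --
# the paper's predictor is not described in the abstract).
M = 32  # 5 bits per pixel instead of 8

def mpcm_encode(pixels):
    return [p % M for p in pixels]

def mpcm_decode(residues, first):
    out = [first]
    for r in residues[1:]:
        pred = out[-1]
        # among all values congruent to r (mod M), pick the one nearest pred;
        # exact as long as neighbouring pixels differ by less than M/2
        k = round((pred - r) / M)
        out.append(r + k * M)
    return out

# smooth 8-bit scanline: adjacent differences stay below M/2 = 16
line = [100, 104, 110, 115, 120, 118, 112, 105]
enc = mpcm_encode(line)
dec = mpcm_decode(enc, first=line[0])
print(dec == line)  # True
```

This shows why MPCM suits smooth high-speed imagery: the bandwidth saving comes purely from dropping high-order bits, with no multi-pass entropy coding, which keeps the hardware simple and fast.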

  15. Real-time high-level video understanding using data warehouse

    NASA Astrophysics Data System (ADS)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

High-level video content analysis, such as video surveillance, is often limited by the computational demands of automatic image understanding: reasoning processes like categorization require huge computing resources, and representing knowledge of objects, scenarios, and other models requires huge amounts of data. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass-storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimize data warehouse concepts for video analysis, add processes that adapt the current model, and pre-process data to speed up queries. In this way, when new data are sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, the data are processed and the in-memory data model is updated. After some processing, possible interpretations of the data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.

  16. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, owing to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions include: video switching, stereo and 3-D provisions, color-sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. These additions are especially appealing for high-bandwidth, multi-sensor systems with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers and allows time-multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  17. Automated Visual Event Detection, Tracking, and Data Management System for Cabled-Observatory Video

    NASA Astrophysics Data System (ADS)

    Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.

    2008-12-01

Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often forces a reduction in the amount of video recorded, through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed for lack of good tools or of the human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. For detecting interesting events in the video, the AVED software has been developed over the last 5 years. AVED is based on a neuromorphic selective-attention algorithm modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing. Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.

  18. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
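Vector quantization, the conventional baseline the abstract compares against, maps each block of pixels to the index of its nearest codebook vector. A toy sketch using plain k-means on random 2x2 blocks (a stand-in for the LBG training usually used for VQ codebooks; the block size, codebook size, and data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_codebook(blocks, k=4, iters=10):
    """Plain k-means codebook training (a simple stand-in for LBG)."""
    centers = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # squared distance of every block to every codebook vector
        d = ((blocks[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = blocks[labels == j].mean(axis=0)
    return centers

# 200 random 2x2 pixel blocks, flattened to length-4 vectors
blocks = rng.integers(0, 256, size=(200, 4)).astype(float)
codebook = make_codebook(blocks)

# encoding: each 4-value block becomes a 2-bit index (32 bits -> 2 bits)
d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
indices = d.argmin(axis=1)
reconstructed = codebook[indices]
print(indices.shape, codebook.shape)
```

The compression ratio is fixed by the codebook size; the neural self-organizing approach in the paper replaces this batch training with a network that learns the codebook, which is what the VLSI neuroprocessor accelerates.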

  19. 78 FR 76861 - Body-Worn Cameras for Criminal Justice Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-19

    ..., Various). 3. Maximum Video Resolution of the BWC (e.g., 640x480, 1080p). 4. Recording Speed of the BWC (e... Photos. 7. Whether the BWC embeds a Time/Date Stamp in the recorded video. 8. The Field of View of the...-person video viewing. 12. The Audio Format of the BWC (e.g., MP2, AAC). 13. Whether the BWC contains...

  20. Experimental study of hydraulic ram effects on a liquid storage tank: Analysis of overpressure and cavitation induced by a high-speed projectile.

    PubMed

    Lecysyn, Nicolas; Bony-Dandrieux, Aurélia; Aprin, Laurent; Heymes, Frédéric; Slangen, Pierre; Dusserre, Gilles; Munier, Laurent; Le Gallic, Christian

    2010-06-15

    This work is part of a project for evaluating catastrophic tank failures caused by impacts with a high-speed solid body. Previous studies on shock overpressure and drag events have provided analytical predictions, but they are not sufficient to explain ejection of liquid from the tank. This study focuses on the hydrodynamic behavior of the liquid after collision to explain subsequent ejection of liquid. The study is characterized by use of high-velocity projectiles and analysis of projectile dynamics in terms of energy loss to tank contents. New tests were performed at two projectile velocities (963 and 1255 m s(-1)) and over a range of viscosities (from 1 to 23.66 mPa s) of the target liquid. Based on data obtained from a high-speed video recorder, a phenomenological description is proposed for the evolution of intense pressure waves and cavitation in the target liquids. Copyright 2010 Elsevier B.V. All rights reserved.

  1. Clinical diagnostic of pleural effusions using a high-speed viscosity measurement method

    NASA Astrophysics Data System (ADS)

    Hurth, Cedric; Klein, Katherine; van Nimwegen, Lena; Korn, Ronald; Vijayaraghavan, Krishnaswami; Zenhausern, Frederic

    2011-08-01

We present a novel bio-analytical method to discriminate between transudative and exudative pleural effusions based on high-speed video analysis of a solid glass sphere impacting a liquid. Because the result depends on the solution viscosity, the method can ultimately replace the battery of biochemical assays currently used. We present results obtained on a series of 7 pleural effusions from consenting patients by analyzing both the splash observed after the glass impactor hits the liquid surface and the sphere's subsequent descent, in a configuration reminiscent of the falling-ball viscometer, with added sensitivity and throughput provided by the high-speed camera. The results demonstrate a clear distinction between the pleural effusions and good correlation with the fluid-chemistry analysis, accurately differentiating exudates and transudates for clinical purposes. The exudative effusions displayed a viscosity of around 1.39 ± 0.08 cP, whereas the transudative effusion was measured at 0.89 ± 0.09 cP, in good agreement with previous reports.
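The falling-ball configuration rests on Stokes' law: at terminal velocity, the viscous drag on a small sphere balances its buoyant weight, giving viscosity directly from the measured speed. The sphere radius, densities, and velocity below are illustrative assumptions, not values from the study:

```python
# Stokes'-law viscosity from the terminal velocity of a falling sphere:
# eta = 2 * r^2 * (rho_sphere - rho_fluid) * g / (9 * v_terminal).
# All input numbers are illustrative assumptions, not measured values.
def stokes_viscosity(radius_m, rho_sphere, rho_fluid, v_terminal, g=9.81):
    """Dynamic viscosity in Pa*s from Stokes' drag at terminal velocity."""
    return 2.0 * radius_m**2 * (rho_sphere - rho_fluid) * g / (9.0 * v_terminal)

# e.g. a 0.5 mm glass sphere (2500 kg/m^3) in a water-like fluid (1000 kg/m^3)
# sinking at 0.6 m/s
eta = stokes_viscosity(0.5e-3, 2500.0, 1000.0, v_terminal=0.6)
print(round(eta * 1000, 2), "mPa*s")  # ~1.36 mPa*s
```

High-speed video supplies the terminal velocity from frame-to-frame ball positions, which is where the camera's sensitivity and throughput advantage over a classical falling-ball viscometer comes from.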

  2. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
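The augmentation idea described above can be sketched in a few lines: take one annotated image and synthesize a short "video" by applying small random shifts to both the image and its saliency mask. This is a hypothetical, simplified stand-in for the paper's technique (which simulates richer motions), not its actual implementation.

```python
import numpy as np

def simulate_video_from_image(image, mask, n_frames=8, max_shift=3, seed=0):
    """Simulate a short training 'video' from one annotated image by
    applying the same small random integer translation to the image and
    its saliency mask, so every synthetic frame stays pixel-annotated."""
    rng = np.random.default_rng(seed)
    frames, masks = [], []
    for _ in range(n_frames):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        frames.append(np.roll(image, (dy, dx), axis=(0, 1)))
        masks.append(np.roll(mask, (dy, dx), axis=(0, 1)))
    return np.stack(frames), np.stack(masks)

# Toy 16x16 grayscale image with a bright square as the "salient" object.
img = np.zeros((16, 16)); img[4:8, 4:8] = 1.0
msk = (img > 0).astype(np.uint8)
vid, vmsk = simulate_video_from_image(img, msk)
print(vid.shape, vmsk.shape)  # (8, 16, 16) (8, 16, 16)
```

Because image and mask are shifted together, the mask remains a valid pixel-wise annotation for every synthetic frame.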

  3. Video game players show higher performance but no difference in speed of attention shifts.

    PubMed

    Mack, David J; Wiesmann, Helene; Ilg, Uwe J

    2016-09-01

    Video games have become both a widespread leisure activity and a substantial field of research. In a variety of tasks, video game players (VGPs) perform better than non-video game players (NVGPs). This difference is most likely explained by an alteration of the basic mechanisms underlying visuospatial attention. More specifically, the present study hypothesizes that VGPs are able to shift attention faster than NVGPs. Such alterations in attention cannot be disentangled from changes in stimulus-response mappings in reaction time based measurements. Therefore, we used a spatial cueing task with varying cue lead times (CLTs) to investigate the speed of covert attention shifts of 98 male participants divided into 36 NVGPs and 62 VGPs based on their weekly gaming time. VGPs exhibited higher peak and mean performance than NVGPs. However, we did not find any differences in the speed of covert attention shifts as measured by the CLT needed to achieve peak performance. Thus, our results clearly rule out faster stimulus-response mappings as an explanation for the higher performance of VGPs in line with previous studies. More importantly, our data do not support the notion of faster attention shifts in VGPs as another possible explanation. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Comparative Analysis of THOR-NT ATD vs. Hybrid III ATD in Laboratory Vertical Shock Testing

    DTIC Science & Technology

    2013-09-01

    were taken both pretest and post-test for each test event (figure 5). Figure 5. Rigid fixture placed on the drop table with ATD seated: Hybrid III...Experimental Procedure...Test Setup...frames per second and with a Vision Research Phantom V9.1 (Wayne, NJ) high-speed video camera, sampling 1000 frames per second.

  5. No relationship exists between urinary NT-proBNP and GPS technology in professional rugby union.

    PubMed

    Lindsay, Angus; Lewis, John G; Gill, Nicholas; Draper, Nick; Gieseg, Steven P

    2017-08-01

    We investigated the level of cardiovascular stress associated with professional rugby union and whether these changes could be explained through external workload systems like GPS and video analysis. Urine samples (14 in game one and 13 in game two) were collected from professional rugby players before, immediately post- and 36h post-play in two consecutive games. Urine was analysed for NT-proBNP by ELISA. Comparisons with GPS data (player load and distance covered at specific speed bands) and video analysis (total impacts) were conducted. There was a significant increase in urinary NT-proBNP during game one (31.6±5.4 to 53.5±10.8 pg/mL) and game two (35.4±3.9 to 49.8±11.7 pg/mL) that did not correlate with the number of impacts, total distance covered, distance covered at pre-determined speed bands or player load. Concentrations returned to pre-game levels 36h post-game, whilst a large inter-individual variation in NT-proBNP was observed among players (p<0.001). Professional rugby union causes a transient increase in cardiovascular stress that seems to be independent of the external workload characteristics of a player. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  6. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also efficient training on large sample sets. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. 
A comparison to legacy target recognition approaches shows the impressive performance increase by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
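The CNN+SVM pipeline above fixes the feature extractor and trains only a linear classifier on top. The proprietary frequency-domain SVM is not public, so the following is a generic hinge-loss linear SVM trained by subgradient descent on pre-computed feature vectors; the toy features are synthetic and the whole block is an illustrative stand-in, not the Airbus method.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM via subgradient descent on the regularized
    hinge loss. X: (n, d) feature vectors (e.g., CNN descriptors),
    y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: update
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # only shrink (regularize)
                w = (1 - lr * lam) * w
    return w, b

# Toy "features": two well-separated clusters standing in for two classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(acc)
```

On this clearly separable toy set the classifier reaches full training accuracy; the value of the fixed-CNN design is that only this cheap linear stage must be retrained for a new target class.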

  7. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    NASA Technical Reports Server (NTRS)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  8. Low-cost synchronization of high-speed audio and video recordings in bio-acoustic experiments.

    PubMed

    Laurijssen, Dennis; Verreycken, Erik; Geipel, Inga; Daems, Walter; Peremans, Herbert; Steckel, Jan

    2018-02-27

    In this paper, we present a method for synchronizing high-speed audio and video recordings of bio-acoustic experiments. By embedding a random signal into the recorded video and audio data, robust synchronization of a diverse set of sensor streams can be performed without the need to keep detailed records. The synchronization can be performed using recording devices without dedicated synchronization inputs. We demonstrate the efficacy of the approach in two sets of experiments: behavioral experiments on different species of echolocating bats and the recordings of field crickets. We present the general operating principle of the synchronization method, discuss its synchronization strength and provide insights into how to construct such a device using off-the-shelf components. © 2018. Published by The Company of Biologists Ltd.
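The core of the synchronization method, recovering where a shared random signal sits in each recorded stream, can be sketched with a cross-correlation search. The signal lengths and offset below are arbitrary illustrative values, not the authors' hardware parameters.

```python
import numpy as np

def find_offset(ref, sig):
    """Recover the sample offset of an embedded reference signal `ref`
    inside a noisy recording `sig` by sliding cross-correlation: the lag
    with maximum correlation marks where the sync signal begins."""
    corr = np.correlate(sig, ref, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(0)
ref = rng.standard_normal(500)            # shared pseudo-random sync signal
offset = 1234
stream = rng.standard_normal(5000) * 0.1  # background noise in one recording
stream[offset:offset + 500] += ref        # sync signal embedded at unknown time
print(find_offset(ref, stream))  # 1234
```

Running the same search on each sensor stream yields per-stream offsets, and aligning them synchronizes the streams without dedicated sync inputs.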

  9. Note: Sound recovery from video using SVD-based information extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from video frames are transformed into column vectors and then assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as intelligible acoustic signals. The standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online from a piece of paper that is stimulated by sound waves within 1 min.
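The OIB projection step can be demonstrated on synthetic data: vectorize each sub-image as a column, remove the static (mean) scene, take the SVD, and project every frame onto the leading basis vector. The 5 Hz flicker below is a toy vibration, not data from the paper.

```python
import numpy as np

def recover_signal(subimages):
    """SVD-based extraction: stack vectorized sub-images as columns,
    subtract the per-pixel mean (static scene), compute the SVD, and
    project each frame onto the first orthonormal image basis (OIB).
    The resulting time series carries the dominant vibration."""
    A = np.stack([f.ravel() for f in subimages], axis=1)  # pixels x frames
    A = A - A.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return A.T @ U[:, 0]

# Synthetic "video": an 8x8 patch whose brightness oscillates at 5 Hz.
fs = 200
t = np.arange(200) / fs
pattern = np.outer(np.hanning(8), np.hanning(8))
frames = [0.5 + 0.1 * np.sin(2 * np.pi * 5 * ti) * pattern for ti in t]
sig = recover_signal(frames)
spectrum = np.abs(np.fft.rfft(sig))
peak_hz = np.fft.rfftfreq(len(sig), 1 / fs)[np.argmax(spectrum)]
print(peak_hz)  # 5.0
```

The recovered projection is a clean sinusoid whose spectral peak sits at the stimulus frequency, which is exactly how tuning-fork tones are read out in the note.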

  10. Video on phone lines: technology and applications

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1996-03-01

    Recent advances in communications signal processing and VLSI technology are fostering tremendous interest in transmitting high-speed digital data over ordinary telephone lines at bit rates substantially above the ISDN Basic Access rate (144 Kbit/s). Two new technologies, high-bit-rate digital subscriber lines and asymmetric digital subscriber lines, promise transmission over most of the embedded loop plant at 1.544 Mbit/s and beyond. Stimulated by these research promises, by rapid advances in video coding techniques, and by standards activity, information networks around the globe are now exploring possible business opportunities of offering quality video services (such as distance learning, telemedicine, and telecommuting) through this high-speed digital transport capability in the copper loop plant. Visual communications for residential customers have become more feasible than ever, both technically and economically.

  11. Analysis of TIMS performance subjected to simulated wind blast

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Kuo, S.

    1992-01-01

    The results of the performance of the Thermal Infrared Multispectral Scanner (TIMS) when it is subjected to various wind conditions in the laboratory are described. Various wind conditions were simulated using a 24 inch fan or combinations of air jet streams blowing toward either or both of the blackbody surfaces. The fan was used to simulate a large volume of air flow at moderate speeds (up to 30 mph). The small diameter air jets were used to probe TIMS system response in reaction to localized wind perturbations. The maximum nozzle speed of the air jet was 60 mph. A range of wind directions and speeds were set up in the laboratory during the test. The majority of the wind tests were conducted under ambient conditions with the room temperature fluctuating no more than 2 C. The temperature of the high speed air jet was determined to be within 1 C of the room temperature. TIMS response was recorded on analog tape. Additional thermistor readouts of the blackbody temperatures and thermocouple readout of the ambient temperature were recorded manually to be compared with the housekeeping data recorded on the tape. Additional tests were conducted under conditions of elevated and cooled room temperatures. The room temperature was varied between 19.5 to 25.5 C in these tests. The calibration parameters needed for quantitative analysis of TIMS data were first plotted on a scanline-by-scanline basis. These parameters are the low and high blackbody temperature readings as recorded by the TIMS and their corresponding digitized count values. Using these values, the system transfer equation was calculated for each scanline. This equation allows us to compute the flux for any video count from the slope and intercept of the straight line that relates flux to digital count. The actual video of the target (the lab floor in this case) was then compared with a simulated target. 
This simulated target was assumed to be a blackbody with an emissivity of 0.95 at the ambient temperature recorded by the TIMS for each scanline. Using the slope and intercept, the flux corresponding to this target was converted into digital counts. The counts were observed to have a strong correlation with the actual video as recorded by the TIMS. The attached graphs describe the performance of the TIMS when compressed air is blown at each one of the blackbodies at different speeds. The effect of blowing a fan and changing the room temperature is also being analyzed. Results indicate that the TIMS system responds to variation in wind speed in real time and maintains the capability to produce accurate temperatures on a scan line basis.
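The scanline transfer equation described above is a two-point linear calibration: the low and high blackbody readings fix a straight line relating flux to digital counts. The counts and flux values below are made up for illustration, not actual TIMS housekeeping data.

```python
def counts_to_flux(counts, low_counts, high_counts, flux_low, flux_high):
    """Two-point (low/high blackbody) calibration: compute the slope and
    intercept of the line relating flux to digital counts for one
    scanline, then evaluate it at an arbitrary video count."""
    slope = (flux_high - flux_low) / (high_counts - low_counts)
    intercept = flux_low - slope * low_counts
    return slope * counts + intercept

# Hypothetical scanline: low blackbody reads 40 counts at flux 10.0,
# high blackbody reads 200 counts at flux 50.0 (arbitrary units).
print(counts_to_flux(120, 40, 200, 10.0, 50.0))  # 30.0
```

Inverting the same line converts a simulated target's flux back into digital counts, which is how the simulated and actual video were compared.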

  12. The impact of complete denture making instructional videos on self-directed learning of clinical skills.

    PubMed

    Kon, Haruka; Botelho, Michael George; Bridges, Susan; Leung, Katherine Chiu Man

    2015-04-01

    The aim of this research was to evaluate the effectiveness of a clinical instructional video with a structured worksheet for independent self-study in a complete denture program. Forty-seven multilingual dental students completed a task by watching a subtitled instructional video on clinical complete denture procedures. After completion, students evaluated their learning experience, and 11 students participated in focus group interviews to gain further insight. A mixed-methods approach to data collection and analysis provided descriptive statistical results, and a grounded theory approach to coding identified key concepts and categories from the qualitative data. Over 70% of students had favorable opinions of the learning experience and indicated that the speed and length of the video were appropriate. Comments on the subtitles were mixed, with some students preferring subtitles over audio and others the reverse. The video resource was considered valuable because the replay and review functions allowed better visualization of the procedures, making it a good recap tool for the clinical demonstration and a better revision aid than textbooks. Students believed that, if they were able to view these videos at will, the videos would supplement their self-study. Despite the positive response, videos were not considered a replacement for live clinical demonstrations. While students preferred live demonstrations over the clinical videos, they recognized the videos as supplemental learning material for self-study, given their ease of access and usefulness for revision and pre-clinical preparation. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  13. Aggressive driving video and non-contact enforcement (ADVANCE): drivers' reaction to violation notices : summary of survey results

    DOT National Transportation Integrated Search

    2001-01-01

    ADVANCE is an integration of state of the practice, off-the-shelf technologies which include video, speed measurement, distance measurement, and digital imaging that detects UDAs in the traffic stream and subsequently notifies violators by ma...

  14. Intensive video gaming improves encoding speed to visual short-term memory in young male adults.

    PubMed

    Wilms, Inge L; Petersen, Anders; Vangkilde, Signe

    2013-01-01

    The purpose of this study was to measure the effect of action video gaming on central elements of visual attention using Bundesen's (1990) Theory of Visual Attention. To examine the cognitive impact of action video gaming, we tested basic functions of visual attention in 42 young male adults. Participants were divided into three groups depending on the amount of time spent playing action video games: non-players (<2h/month, N=12), casual players (4-8h/month, N=10), and experienced players (>15h/month, N=20). All participants were tested in three tasks which tap central functions of visual attention and short-term memory: a test based on the Theory of Visual Attention (TVA), an enumeration test and finally the Attentional Network Test (ANT). The results show that action video gaming does not seem to impact the capacity of visual short-term memory. However, playing action video games does seem to improve the encoding speed of visual information into visual short-term memory and the improvement does seem to depend on the time devoted to gaming. This suggests that intense action video gaming improves basic attentional functioning and that this improvement generalizes into other activities. The implications of these findings for cognitive rehabilitation training are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Genesis Reentry Observations and Data Analysis

    NASA Technical Reports Server (NTRS)

    Suggs, R. M.; Swift, W. R.

    2005-01-01

    The Genesis spacecraft reentry represented a unique opportunity to observe a "calibrated meteor" from northern Nevada. Knowing its speed, mass, composition, and precise trajectory made it a good subject to test some of the algorithms used to determine meteoroid mass from observed brightness. It was also a good test of an inexpensive set of cameras that could be deployed to observe future shuttle reentries. The utility of consumer-grade video cameras was evident during the STS-107 accident investigation, and the Genesis reentry gave us the opportunity to specify and test commercially available cameras that could be used during future reentries. This Technical Memorandum describes the video observations and their analysis, compares the results with a simple photometric model, describes the forward scatter radar experiment, and lists lessons learned from the expedition and implications for the Stardust reentry in January 2006 as well as future shuttle reentries.

  16. The Tacoma Narrows Bridge Collapse on Film and Video

    NASA Astrophysics Data System (ADS)

    Olson, Don; Hook, Joseph; Doescher, Russell; Wolf, Steven

    2015-11-01

    This month marks the 75th anniversary of the Tacoma Narrows Bridge collapse. During a gale on Nov. 7, 1940, the bridge exhibited remarkable oscillations before collapsing spectacularly (Figs. 1-5). Physicists over the years have spent a great deal of time and energy studying this event. By using open-source analysis tools and digitized footage of the disaster, physics students in both high school and college can continue in this tradition. Students can watch footage of "Galloping Gertie," ask scientific questions about the bridge's collapse, analyze data, and draw conclusions from that analysis. Students should be encouraged to pursue their own investigations, but the question that drove our inquiry was this: "When physics classes watch modern video showing the oscillations and the free fall of the bridge fragments, are these scenes sped up, slowed down, or at the correct speed compared to what was observed by the eyewitnesses on Nov. 7, 1940?"
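The playback-speed question above can be tested quantitatively: a freely falling fragment obeys t = sqrt(2h/g), so counting the frames a fragment takes to fall a known height reveals whether footage runs at true speed. The ~59 m height below is an illustrative figure for the deck's clearance above the water, not a measurement from the footage.

```python
import math

def fall_time(height_m, g=9.81):
    """Free-fall time from rest over a given height: t = sqrt(2h/g)."""
    return math.sqrt(2 * height_m / g)

# A fragment falling roughly 59 m (illustrative deck height above water).
t = fall_time(59.0)
frames_at_24fps = t * 24   # frames a true-speed 24 fps film would show
print(round(t, 2), round(frames_at_24fps, 1))
```

If the digitized clip shows substantially more or fewer frames over the same fall, the scene has been slowed down or sped up relative to what eyewitnesses saw.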

  17. Flexible retrospective selection of temporal resolution in real-time speech MRI using a golden-ratio spiral view order.

    PubMed

    Kim, Yoon-Chul; Narayanan, Shrikanth S; Nayak, Krishna S

    2011-05-01

    In speech production research using real-time magnetic resonance imaging (MRI), the analysis of articulatory dynamics is performed retrospectively. A flexible selection of temporal resolution is highly desirable because of natural variations in speech rate and variations in the speed of different articulators. The purpose of the study is to demonstrate a first application of golden-ratio spiral temporal view order to real-time speech MRI and investigate its performance by comparison with conventional bit-reversed temporal view order. Golden-ratio view order proved to be more effective at capturing the dynamics of rapid tongue tip motion. A method for automated blockwise selection of temporal resolution is presented that enables the synthesis of a single video from multiple temporal resolution videos and potentially facilitates subsequent vocal tract shape analysis. Copyright © 2010 Wiley-Liss, Inc.
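The key property of a golden-ratio view order is that successive spiral interleaves are rotated by a golden-ratio fraction of a full turn (2*pi/phi, about 222.5 degrees), so any contiguous window of views covers angles near-uniformly; this is what makes the temporal resolution retrospectively selectable. The sketch below shows the angle sequence only, as a simplified illustration of the ordering rather than the paper's full reconstruction.

```python
import math

def golden_ratio_angles(n):
    """First n spiral-interleaf rotation angles under a golden-ratio
    view order: angle_i = i * 2*pi/phi, wrapped to [0, 2*pi)."""
    phi = (1 + math.sqrt(5)) / 2
    return [(i * 2 * math.pi / phi) % (2 * math.pi) for i in range(n)]

angles = golden_ratio_angles(5)
print([round(math.degrees(a), 1) for a in angles])  # [0.0, 222.5, 85.0, 307.5, 170.0]
```

Note how the first five angles already spread across the circle, whereas a sequential (bit-reversed or linear) order needs the whole cycle before coverage is uniform.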

  18. Implementation of the Vehicle Black Box Using External Sensor and Networks

    NASA Astrophysics Data System (ADS)

    Back, Sung-Hyun; Kim, Jang-Ju; Kim, Mi-Jin; Kim, Hwa-Sun; Park, You-Sin; Jang, Jong-Wook

    With the increasing use of black boxes for vehicles, they are being widely studied and developed. Existing black boxes store only video and sound, and have limitations in accurately identifying accident contexts. Besides, data are lost if the black box in the vehicle is damaged. In this study, a smart black box was manufactured that stores additional data, including tire pressure, in-vehicle data (e.g., head lamp operation), current location, travel path and speed, together with video and sound, using OBD-II and GPS to improve the efficiency and accuracy of accident analysis. An external storage device was used for data backup via wireless LAN to allow checking of data even when the black box is damaged.

  19. Development of a ground signal processor for digital synthetic array radar data

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1981-01-01

    A modified APQ-102 sidelooking array radar (SLAR) in a B-57 aircraft test bed is used, with other optical and infrared sensors, in remote sensing of Earth surface features for various users at NASA Johnson Space Center. The video from the radar is normally recorded on photographic film and subsequently processed photographically into high resolution radar images. Using a high speed sampling (digitizing) system, the two receiver channels of cross-and co-polarized video are recorded on wideband magnetic tape along with radar and platform parameters. These data are subsequently reformatted and processed into digital synthetic aperture radar images with the image data available on magnetic tape for subsequent analysis by investigators. The system design and results obtained are described.

  20. Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.

    PubMed

    Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu

    2016-09-01

    Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machineries, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to complete this task. A high-speed camera is first used to capture the video of the rotational object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to complete motion tracking by aligning the template image in the video sequence. Given the nonplanar surface of the cylindrical object, a nonlinear transformation is designed for modeling the rotation tracking. Despite its unconventional form, the transformation extracts the angle concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track the video sequence consecutively. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct current motor test rig through comparison with results obtained by a microphone. Experimental results demonstrate that the proposed algorithm can accurately extract rotational angles and can measure IRS effectively and without contact.
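Once a per-frame rotational angle has been tracked, the IRS follows by unwrapping the angle (removing the 2*pi jumps) and differentiating frame to frame. The sketch below uses a synthetic angle sequence in place of the paper's Lucas-Kanade tracking output.

```python
import numpy as np

def instantaneous_speed(angles_rad, fps):
    """Instantaneous rotational speed (rev/s) from a tracked angle
    sequence: unwrap to remove 2*pi wraparounds, then take the
    frame-to-frame derivative."""
    unwrapped = np.unwrap(angles_rad)
    return np.diff(unwrapped) * fps / (2 * np.pi)

fps = 1000.0                                     # high-speed camera frame rate
t = np.arange(100) / fps
angles = (2 * np.pi * 30.0 * t) % (2 * np.pi)    # shaft spinning at 30 rev/s
irs = instantaneous_speed(angles, fps)
print(np.allclose(irs, 30.0))  # True
```

Unwrapping works as long as the shaft turns less than half a revolution between frames, which is why a high frame rate relative to the rotation speed matters.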

  1. Viewers can keep up with fast subtitles: Evidence from eye movements.

    PubMed

    Szarkowska, Agnieszka; Gerber-Morón, Olivia

    2018-01-01

    People watch subtitled audiovisual materials more than ever before. With the proliferation of subtitled content, we are also witnessing an increase in subtitle speeds. However, there is an ongoing controversy about what optimum subtitle speeds should be. This study looks into whether viewers can keep up with increasingly fast subtitles and whether the way people cope with subtitled content depends on their familiarity with subtitling and on their knowledge of the language of the film soundtrack. We tested 74 English, Polish and Spanish viewers watching films subtitled at different speeds (12, 16 and 20 characters per second). The films were either in Hungarian, a language unknown to the participants (Experiment 1), or in English (Experiment 2). We measured viewers' comprehension, self-reported cognitive load, scene and subtitle recognition, preferences and enjoyment. By analyzing people's eye gaze, we were able to discover that most viewers could read the subtitles as well as follow the images, coping well even with fast subtitle speeds. Slow subtitles triggered more re-reading, particularly in English clips, causing more frustration and less enjoyment. Faster subtitles with unreduced text were preferred in the case of English videos, and slower subtitles with text edited down in Hungarian videos. The results provide empirical grounds for revisiting current subtitling practices to enable more efficient processing of subtitled videos for viewers.
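The speed metric varied in the study (12, 16 and 20 cps) is simply subtitle length divided by display time. The helper below counts spaces toward the total, which is a simplifying assumption; subtitling conventions differ on this.

```python
def chars_per_second(text, duration_s):
    """Subtitle presentation speed in characters per second (cps),
    counting all characters including spaces."""
    return len(text) / duration_s

line = "Viewers can keep up with fast subtitles."
print(chars_per_second(line, 2.0))  # 20.0 -- the fastest condition tested
```

A subtitle shown at or below the target cps for its text length gives viewers the reading time the corresponding experimental condition assumes.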

  2. Method of determining the necessary number of observations for video stream documents recognition

    NASA Astrophysics Data System (ADS)

    Arlazarov, Vladimir V.; Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Janiszewski, Igor

    2018-04-01

    This paper discusses the task of document recognition on a sequence of video frames. In order to optimize processing speed, the stability of recognition results obtained from several video frames is estimated. For identity document (Russian internal passport) recognition on a mobile device, it is shown that the number of observations necessary for obtaining a precise recognition result can be significantly decreased.
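A simple way to exploit result stability is a stopping rule: stop capturing frames as soon as the last k per-frame recognition results agree. This is a hypothetical illustration of the idea, not the stability estimator from the paper.

```python
def frames_needed(results, k=3):
    """Stop once the last k per-frame recognition results agree; return
    the number of frames consumed, or None if agreement never occurs."""
    for n in range(k, len(results) + 1):
        window = results[n - k:n]
        if all(r == window[0] for r in window):
            return n
    return None

# Per-frame OCR of one passport field; the first frame is blurry.
frames = ["IVAN0V", "IVANOV", "IVANOV", "IVANOV", "IVANOV"]
print(frames_needed(frames, k=3))  # 4
```

With a rule like this, easy captures finish after a handful of frames while hard ones keep accumulating observations.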

  3. Integrated microfluidic technology for sub-lethal and behavioral marine ecotoxicity biotests

    NASA Astrophysics Data System (ADS)

    Huang, Yushi; Reyes Aldasoro, Constantino Carlos; Persoone, Guido; Wlodkowic, Donald

    2015-06-01

    Changes in behavioral traits exhibited by small aquatic invertebrates are increasingly postulated as ethically acceptable and more sensitive endpoints for detection of water-borne ecotoxicity than conventional mortality assays. Despite the importance of such behavioral biotests, their implementation is profoundly limited by the lack of appropriate biocompatible automation, integrated optoelectronic sensors, and the associated electronics and analysis algorithms. This work outlines development of a proof-of-concept miniaturized Lab-on-a-Chip (LOC) platform for rapid water toxicity tests based on changes in swimming patterns exhibited by Artemia franciscana (Artoxkit M™) nauplii. In contrast to conventionally performed end-point analysis based on counting numbers of dead/immobile specimens, we performed a time-resolved video data analysis to dynamically assess the impact of a reference toxicant on the swimming pattern of A. franciscana. Our system design combined: (i) an innovative microfluidic device keeping free-swimming Artemia sp. nauplii under continuous microperfusion as a means of toxin delivery; (ii) a mechatronic interface for user-friendly fluidic actuation of the chip; and (iii) miniaturized video acquisition for movement analysis of test specimens. The system was capable of performing fully programmable time-lapse and video-microscopy of multiple samples for rapid ecotoxicity analysis. It enabled development of a user-friendly and inexpensive test protocol to dynamically detect sub-lethal behavioral end-points such as changes in speed of movement or distance traveled by each animal.

  4. High Speed Video Measurements of a Magneto-optical Trap

    NASA Astrophysics Data System (ADS)

    Horstman, Luke; Graber, Curtis; Erickson, Seth; Slattery, Anna; Hoyt, Chad

    2016-05-01

    We present a video method to observe the mechanical properties of a lithium magneto-optical trap. A sinusoidally amplitude-modulated laser beam perturbed a collection of trapped ⁷Li atoms and the oscillatory response was recorded with a NAC Memrecam GX-8 high speed camera at 10,000 frames per second. We characterized the trap by modeling the oscillating cold atoms as a damped, driven, harmonic oscillator. Matlab scripts tracked the atomic cloud movement and relative phase directly from the captured high speed video frames. The trap spring constant, with magnetic field gradient bz = 36 G/cm, was measured to be 4.5 ± 0.5 × 10⁻¹⁹ N/m, which implies a trap resonant frequency of 988 ± 55 Hz. Additionally, at bz = 27 G/cm the spring constant was measured to be 2.3 ± 0.2 × 10⁻¹⁹ N/m, which corresponds to a resonant frequency of 707 ± 30 Hz. These properties at bz = 18 G/cm were found to be 8.8 ± 0.5 × 10⁻²⁰ N/m, and 438 ± 13 Hz. NSF #1245573.
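The link between the measured spring constants and the quoted resonant frequencies is the harmonic-oscillator relation f = sqrt(k/m)/(2*pi), with m the mass of one ⁷Li atom (a sketch that ignores the small damping shift of the resonance).

```python
import math

def resonant_frequency(k, m):
    """Undamped resonant frequency of a harmonic oscillator with spring
    constant k (N/m) and mass m (kg): f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

m_li7 = 7.016 * 1.6605e-27   # mass of one 7Li atom in kg
print(round(resonant_frequency(4.5e-19, m_li7)))  # 989 Hz (reported: 988 +/- 55)
```

The same check reproduces the other two data points: k = 2.3 × 10⁻¹⁹ N/m gives about 707 Hz and k = 8.8 × 10⁻²⁰ N/m about 437 Hz, consistent with the reported values.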

  5. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
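The cut-detection idea can be illustrated on per-frame motion-vector statistics: at a scene cut, the motion-compensation residual spikes relative to smooth camera motion. The sketch below is a plain thresholding stand-in for the paper's Bayesian classifier, and the input is a synthetic list rather than real MPEG-1 motion vectors.

```python
import numpy as np

def detect_cuts(motion_mags, z=2.5):
    """Toy scene-cut detector: flag frames whose mean motion-vector
    magnitude deviates from the sequence mean by more than z standard
    deviations."""
    mags = np.asarray(motion_mags, dtype=float)
    mu, sd = mags.mean(), mags.std()
    return [i for i, m in enumerate(mags) if abs(m - mu) > z * sd]

# Smooth pan (~2 px/frame) with a cut-induced spike at frame 7.
mags = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 25.0, 2.0, 1.9]
print(detect_cuts(mags))  # [7]
```

Because the statistics come straight from the compressed-domain motion vectors, no frame ever needs to be decoded, which is the source of the method's speed.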

  6. Aircraft Survivability: Unmanned Aircraft Systems Survivability. Fall 2008

    DTIC Science & Technology

    2008-01-01

    until June 2005. Upon deactivation, LtCol Matthews became the “Marine JCAT of One” and was assigned to the 4th Marine Aircraft Wing as a drilling ...strain gauges along with high-speed video. Seven tests were accomplished (Figure 5): four with no airflow, and three with 200 knots of airflow across...collection for manned and unmanned systems to support vulnerability testing and analysis. As Figure 7 illustrates, the system uses advanced metrology

  7. The Rotated Speeded-Up Robust Features Algorithm (R-SURF)

    DTIC Science & Technology

    2014-06-01

    blue color model; YUV: one luminance, two chrominance color model. EXECUTIVE SUMMARY Automatic...256 × 256 × 3 color scheme with an uncompressed image is used, each visual pixel has a possibility of 256³ combinations [5]. There are...Portugal, 2009. [41] J. Sivic and A. Zisserman, “Efficient visual search of videos cast as text retrieval,” IEEE Transactions on Pattern Analysis and

  8. Video-Based Systems Research, Analysis, and Applications Opportunities

    DTIC Science & Technology

    1981-07-30

    as a COM software consultant, marketing its own COMTREVE software; * DatagraphiX Inc., San Diego, offers several versions of its COM recorders. AutoCOM...Metropolitan Microforms Ltd. in New York markets its MCAR system, which satisfies the need for a one- or multiple-user information retrieval and input...targeted to the market for high-speed data communications within a single facility, such as a university campus. The first commercial installations were set

  9. Bullet Retarding Forces in Ballistic Gelatin by Analysis of High Speed Video

    DTIC Science & Technology

    2012-12-28

    tends to be close to the projectile path through tissue. The permanent cavity may be enlarged if the tissue is stretched beyond the elastic limit...inertia, weight, and elasticity causes it to spring back into place. Inelastic tissues such as liver, spleen, and brain stretch much less than elastic tissues such as...

  11. Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices

    PubMed Central

    Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee

    2015-01-01

    Mobile handheld devices have limited resources in terms of processor clock speed and battery size, so transmitting video data securely over Video Sensor Networks (VSNs) requires an efficient video encryption method to meet the increasing demand for secure connections. Selective encryption methods reduce the amount of computation needed while satisfying high-level security requirements, by selecting an important part of the video data and encrypting only that part. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed selective encryption method exploits the error propagation property of an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
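    The core idea of selective encryption — encrypting only the perceptually important syntax elements and leaving the rest untouched — can be illustrated with a toy sketch. The SHA-256 counter-mode keystream and the byte-block representation here are stand-ins for exposition only, not the paper's format-compliant H.264 cipher:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream built from SHA-256 (illustration
    only, not a production cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def selective_encrypt(blocks, important, key):
    """XOR-encrypt only the blocks flagged as important (standing in
    for I-macroblock DC/AC coefficients and P-macroblock motion
    vectors), leaving all other syntax elements untouched."""
    out = []
    for i, blk in enumerate(blocks):
        if important[i]:
            ks = keystream(key + i.to_bytes(4, "big"), len(blk))
            out.append(bytes(a ^ b for a, b in zip(blk, ks)))
        else:
            out.append(blk)
    return out

blocks = [b"I-block DC/AC coefficients", b"other syntax", b"motion vectors"]
flags = [True, False, True]
cipher = selective_encrypt(blocks, flags, b"session-key")
plain = selective_encrypt(cipher, flags, b"session-key")  # XOR inverts itself
```

    Because only a fraction of the bitstream is touched, the encryption workload scales with the flagged share rather than with the whole video.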

  12. Video analysis of concussion injury mechanism in under-18 rugby

    PubMed Central

    Hendricks, Sharief; O'Connor, Sam; Lambert, Michael; Brown, James C; Burger, Nicholas; Mc Fie, Sarah; Readhead, Clint; Viljoen, Wayne

    2016-01-01

    Background Understanding the mechanism of injury is necessary for the development of effective injury prevention strategies. Video analysis of injuries provides valuable information on the playing situation and athlete-movement patterns, which can be used to formulate these strategies. Therefore, we conducted a video analysis of the mechanism of concussion injury in junior-level rugby union and compared it with a representative and matched non-injury sample. Methods Injury reports for 18 concussion events were collected from the 2011 to 2013 under-18 Craven Week tournaments. Also, video footage was recorded for all 3 years. On the basis of the injury events, a representative ‘control’ sample of matched non-injury events in the same players was identified. The video footage, which had been recorded at each tournament, was then retrospectively analysed and coded. 10 injury events (5 tackle, 4 ruck, 1 aerial collision) and 83 non-injury events were analysed. Results All concussions were a result of contact with an opponent and 60% of players were unaware of the impending contact. For the measurement of head position on contact, 43% had a ‘down’ position, 29% the ‘up and forward’ and 29% the ‘away’ position (n=7). The speed of the injured tackler was observed as ‘slow’ in 60% of injurious tackles (n=5). In 3 of the 4 rucks in which injury occurred (75%), the concussed player was acting defensively either in the capacity of ‘support’ (n=2) or as the ‘jackal’ (n=1). Conclusions Training interventions aimed at improving peripheral vision, strengthening of the cervical muscles, targeted conditioning programmes to reduce the effects of fatigue, and emphasising safe and effective playing techniques have the potential to reduce the risk of sustaining a concussion injury. PMID:27900149

  13. QuickView video preview software of colon capsule endoscopy: reliability in presenting colorectal polyps as compared to normal mode reading.

    PubMed

    Farnbacher, Michael J; Krause, Horst H; Hagel, Alexander F; Raithel, Martin; Neurath, Markus F; Schneider, Thomas

    2014-03-01

    OBJECTIVE. Colon capsule endoscopy (CCE) proved to be highly sensitive in detection of colorectal polyps (CP). Major limitation is the time-consuming video reading. The aim of this prospective, double-center study was to assess the theoretical time-saving potential and its possible impact on the reliability of "QuickView" (QV), in the presentation of CP as compared to normal mode (NM). METHODS. During NM reading of 65 CCE videos (mean patient age 56 years), all frames showing CPs were collected and compared to the number of frames presented by QV at increasing QV settings (10, 20, ... 80%). Reliability of QV in presenting polyps <6 mm and ≥6 mm (significant polyp), and identifying patients for subsequent therapeutic colonoscopy, capsule egestion rate, cleansing level, and estimated time-saving potential were assessed. RESULTS. At a 30% QV setting, the QV video presented 89% of the significant polyps and 86% of any polyps with ≥1 frame (per-polyp analysis) identified in NM before. At a 10% QV setting, 98% of the 52 patients with significant polyps could be identified (per-patient analysis) by QV video analysis. Capsule excretion rate was 74% and colon cleanliness was adequate in 85%. QV's presentation rate correlates to the QV setting, the polyp size, and the number of frames per finding. CONCLUSIONS. Depending on its setting, the reliability of QV in presenting CP as compared to NM reading is notable. However, if no significant polyp is presented by QV, NM reading must be performed afterwards. The reduction of frames to be analyzed in QV might speed up identification of candidates for therapeutic colonoscopy.

  14. Relative Reality: the Movie

    NASA Astrophysics Data System (ADS)

    Onley, David; Steinberg, Gary

    2004-04-01

    The consequences of the Special Theory of Relativity are explored in a virtual world in which the speed of light is only 10 m/s. Ray tracing software and other visualization tools, modified to allow for the finite speed of light, are employed to create a video that brings to life a journey through this imaginary world. The aberration of light, the Doppler effect, the altered perception of time, and the power of incoming radiation are explored in separate segments of this 35 min video. Several of the effects observed are new and quite unexpected. A commentary and animated explanations help keep the viewer from losing all perspective.
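    The relativistic effects shown in the video follow from the standard Doppler formula evaluated with c = 10 m/s; the snippet below is an illustrative calculation (the speeds and angles are made up), not part of the ray-tracing software:

```python
import math

def doppler_factor(v, theta, c=10.0):
    """Relativistic Doppler factor f_observed / f_source for a source
    approached at speed v, seen at angle theta (radians) from the
    direction of motion, in a world where c = 10 m/s."""
    beta = v / c
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

# Walking at 7 m/s (beta = 0.7) straight toward a lamp: blueshift.
head_on = doppler_factor(7.0, 0.0)
# The same lamp seen at 90 degrees: pure time-dilation redshift (1/gamma).
transverse = doppler_factor(7.0, math.pi / 2)
```

    The transverse redshift, absent in classical optics, is one of the "quite unexpected" effects such a simulation can make visible.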

  15. The NASA Fireball Network

    NASA Technical Reports Server (NTRS)

    Cooke, William J.

    2013-01-01

    In the summer of 2008, the NASA Meteoroid Environments Office (MEO) began to establish a video fireball network, based on the following objectives: (1) determine the speed distribution of cm size meteoroids, (2) determine the major sources of cm size meteoroids (showers/sporadic sources), (3) characterize meteor showers (numbers, magnitudes, trajectories, orbits), (4) determine the size at which showers dominate the meteor flux, (5) discriminate between re-entering space debris and meteors, and (6) locate meteorite falls. In order to achieve the above with the limited resources available to the MEO, it was necessary that the network function almost fully autonomously, with very little required from humans in the areas of upkeep or analysis. With this in mind, the camera design and, most importantly, the ASGARD meteor detection software were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN), as NASA has a cooperative agreement with Western's Meteor Physics Group. 15 cameras have been built, and the network now consists of 8 operational cameras, with at least 4 more slated for deployment in calendar year 2013. The goal is to have 15 systems, distributed in two or more groups east of automatic analysis; every morning, this server also automatically generates an email and a web page (http://fireballs.ndc.nasa.gov) containing an automated analysis of the previous night's events. This analysis provides the following for each meteor: UTC date and time, speed, start and end locations (longitude, latitude, altitude), radiant, shower identification, light curve (meteor absolute magnitude as a function of time), photometric mass, orbital elements, and Tisserand parameter. Radiant/orbital plots and various histograms (number versus speed, time, etc.) are also produced. After more than four years of operation, over 5,000 multi-station fireballs have been observed, 3 of which potentially dropped meteorites.
A database containing data on all these events, including the videos and calibration information, has been developed and is being modified to include data from the SOMN and other camera networks.

  16. Computer-assisted 3D kinematic analysis of all leg joints in walking insects.

    PubMed

    Bender, John A; Simpson, Elaine M; Ritzmann, Roy E

    2010-10-26

    High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
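    The paper does not spell out its triangulation step, but recovering a 3D joint position from two synchronized, calibrated cameras is conventionally done with linear (DLT) triangulation, sketched here under the assumption of known 3×4 projection matrices:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two calibrated
    cameras with 3x4 projection matrices P1, P2 and pixel
    observations uv1, uv2.  The null vector of A is the homogeneous
    point that best satisfies both projections."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera matrix P (for the toy check)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    Painted joint markers, as used in the study, make the 2D observations uv1 and uv2 easy to localize before this step.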

  17. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and assessment of image degradation and video data parameters. An assessment is made of present and near term future technology for implementation of video data compression in high speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  18. Loop-the-Loop: An Easy Experiment, A Challenging Explanation

    NASA Astrophysics Data System (ADS)

    Asavapibhop, B.; Suwonjandee, N.

    2010-07-01

    A loop-the-loop built by the Institute for the Promotion of Teaching Science and Technology (IPST) was used in a Thai high school teacher training program to demonstrate circular motion and investigate the concept of conservation of mechanical energy. We took videos with a high-speed camera to record the motion of a spherical steel ball moving down an aluminum inclined track from different release positions. The ball then moved through the circular loop and underwent projectile motion upon leaving the track. We asked the teachers to predict the landing position of the ball if we changed the height of the whole loop-the-loop system, and we analyzed the videos using Tracker, a video analysis software package. It turned out that most teachers did not account for the friction between the ball and the track and could not obtain the correct relationship; hence, their predictions were inconsistent with the actual landing positions of the ball.
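    The idealized analysis the teachers were expected to apply can be made concrete with a short calculation; the numbers below are illustrative, not measurements from the IPST apparatus. Even before friction, accounting for the rolling ball's rotational energy shifts the minimum release height from 2.5R to 2.7R:

```python
g = 9.81  # gravitational acceleration, m/s^2

def min_release_height(R, rolling=True):
    """Minimum release height above the bottom of a vertical loop of
    radius R for a ball to keep contact at the top.  The centripetal
    condition at the top gives v_top^2 = g*R; energy conservation then
    fixes h.  A sphere rolling without slipping (I = 2/5 m r^2) also
    stores rotational energy, raising the requirement."""
    v_top_sq = g * R
    if rolling:
        # m g h = m g (2R) + (1/2) m v^2 (1 + 2/5)
        return 2 * R + 0.5 * v_top_sq * (1 + 2 / 5) / g
    # Pure sliding: m g h = m g (2R) + (1/2) m v^2
    return 2 * R + 0.5 * v_top_sq / g

R = 0.10  # loop radius in metres (illustrative)
h_slide = min_release_height(R, rolling=False)  # 2.5 * R
h_roll = min_release_height(R, rolling=True)    # 2.7 * R
```

    Real releases must start higher still, since friction dissipates energy along the track — the effect most teachers overlooked.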

  19. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuing current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames/s). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames/s). The camera observing site is located in São José dos Campos (23° S, 46° W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m, and it is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  20. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field, and human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The method comprises background subtraction on the RGB and depth video sequences, extraction and integration of historical images of the behavior outlines, feature extraction, and classification, achieving rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and higher recognition rates, and that it is robust to different environmental colors, lightings and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition.
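    The uniform local binary pattern underlying the texture-edge feature can be sketched generically as follows — a basic 3×3, 8-neighbour LBP with the standard "at most two circular transitions" uniformity test; the paper's exact hybrid coding is not reproduced here:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour local binary pattern of a 3x3 patch: each
    neighbour >= centre contributes one bit, walking clockwise from
    the top-left corner."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(patch[i, j] >= c) for i, j in order]
    return sum(b << k for k, b in enumerate(bits))

def is_uniform(code, bits=8):
    """A pattern is 'uniform' if its circular bit string has at most
    two 0/1 transitions; such patterns account for most edges, spots
    and flat regions in natural textures."""
    s = [(code >> k) & 1 for k in range(bits)]
    transitions = sum(s[k] != s[(k + 1) % bits] for k in range(bits))
    return transitions <= 2

# A patch containing a vertical-ish edge: bright top, dark bottom.
patch = np.array([[9, 9, 9],
                  [1, 5, 9],
                  [1, 1, 1]])
code = lbp_code(patch)
```

    Restricting histograms to uniform codes (plus one bin for the rest) shrinks the feature from 256 to 59 bins, which is part of why LBP variants are fast.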

  1. SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.

    PubMed

    Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2016-08-01

    Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and physiopathology of diseases implying contraction impairment. Cardiomyocytes contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human. Copyright © 2016 the American Physiological Society.
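    The FFT principle behind SarcOptiM — the striation period appears as the dominant peak of the spatial frequency spectrum of an intensity profile — can be sketched as below. This shows the general idea only; the function name, synthetic line scan, and parameters are assumptions, not the plug-in's code:

```python
import numpy as np

def striation_period_fft(profile, pixel_size_um):
    """Estimate the striation period of a 1D intensity profile by
    locating the dominant peak of its FFT magnitude spectrum
    (ignoring the DC term)."""
    profile = np.asarray(profile, float)
    profile = profile - profile.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(len(profile), d=pixel_size_um)
    peak = 1 + np.argmax(spectrum[1:])        # skip bin 0
    return 1.0 / freqs[peak]                  # period in micrometres

# Synthetic line scan: 512 pixels at 0.1 um/px with 1.6 um striations.
x = np.arange(512) * 0.1
profile = 1.0 + 0.5 * np.cos(2 * np.pi * x / 1.6)
length = striation_period_fft(profile, 0.1)
```

    Because an FFT of a short profile costs microseconds, a per-frame estimate like this is compatible with the multi-kHz online rates the abstract reports.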

  2. High frequency mode shapes characterisation using Digital Image Correlation and phase-based motion magnification

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, A. J.; Felipe-Sesé, L.; López-Alba, E.; Díaz, F.

    2018-03-01

    High-speed video cameras provide valuable information on dynamic events, and mechanical characterisation has been improved by interpreting behaviour in slow-motion visualisations. In modal analysis, videos contribute to the evaluation of mode shapes but, generally, the motion is too subtle to be interpreted. In recent years, image-processing algorithms have been developed to generate a magnified version of the motion that can be interpreted by the naked eye. Nevertheless, optical techniques such as Digital Image Correlation (DIC) are able to provide quantitative information about the motion with higher sensitivity than the naked eye. For vibration analysis, mode shape characterisation is one of the most interesting applications of DIC: full-field measurements provide higher spatial density than classical instrumentation or Scanning Laser Doppler Vibrometry. However, the accuracy of DIC is reduced at high frequencies as a consequence of the low displacements, and hence it is usually employed in the low-frequency spectrum. In the current work, the combination of DIC and motion magnification is explored in order to provide numerical information in magnified videos and to perform DIC mode shape characterisation at unprecedentedly high frequencies by increasing the amplitude of displacements.

  3. Video-games used in a group setting is feasible and effective to improve indicators of physical activity in individuals with chronic stroke: a randomized controlled trial.

    PubMed

    Givon, Noa; Zeilig, Gabi; Weingarden, Harold; Rand, Debbie

    2016-04-01

    To investigate the feasibility of using video-games in a group setting and to compare the effectiveness of video-games as a group intervention to a traditional group intervention for improving physical activity in individuals with chronic stroke. A single-blind randomized controlled trial with evaluations before and after a 3-month intervention, and at 3-month follow-up. Compliance (session attendance), satisfaction and adverse effects were feasibility measures. Grip strength and gait speed were measures of physical activity. Hip accelerometers quantified steps/day and the Action Research Arm Test assessed the functional ability of the upper extremity. Forty-seven community-dwelling individuals with chronic stroke (29-78 years) were randomly allocated to receive video-game (N=24) or traditional therapy (N=23) in a group setting. There was high treatment compliance for both interventions (video-games: 78%; traditional therapy: 66%), but satisfaction was rated higher for the video-games (93%) than the traditional therapy (71%) (χ(2)=4.98, P=0.026). Adverse effects were not reported in either group. Significant improvements were demonstrated in both groups for gait speed (F=3.9, P=0.02) and for grip strength of the weaker (F=6.67, P=0.002) and stronger hands (F=7.5, P=0.001). Daily steps and functional ability of the weaker hand did not increase in either group. Using video-games in a small group setting is feasible, safe and satisfying, and video-games improve indicators of physical activity of individuals with chronic stroke. © The Author(s) 2015.

  4. Transmission of live laparoscopic surgery over the Internet2.

    PubMed

    Damore, L J; Johnson, J A; Dixon, R S; Iverson, M A; Ellison, E C; Melvin, W S

    1999-11-01

    Video broadcasting of surgical procedures is an important tool for education, training, and consultation. Current video conferencing systems are expensive and time-consuming and require preplanning. Real-time Internet video is known for its poor quality and relies on the equipment and the speed of the connection. The Internet2, a new high-speed (up to 2,048 Mbps), large bandwidth data network presently connects more than 100 universities and corporations. We have successfully used the Internet2 to broadcast the first real-time, high-quality audio/video program from a live laparoscopic operation to distant points. Video output of the laparoscopic camera and audio from a wireless microphone were broadcast to distant sites using a proprietary, PC-based implementation of H.320 video conferencing over a TCP/IP network connected to the Internet2. The receiving sites participated in two-way, real-time video and audio communications and graded the quality of the signal they received. On August 25, 1998, a laparoscopic Nissen fundoplication was transmitted to Internet2 stations in Colorado, Pennsylvania, and to an Internet station in New York. On September 28 and 29, 1998, we broadcast laparoscopic operations throughout both days to the Internet2 Fall Conference in San Francisco, California. Most recently, on February 24, 1999, we transmitted a laparoscopic Heller myotomy to the Abilene Network Launch Event in Washington, DC. The Internet2 is currently able to provide the bandwidth needed for a turn-key video conferencing system with high-resolution, real-time transmission. The system could be used for a variety of teaching and educational programs for experienced surgeons, residents, and medical students.

  5. The influence of the compression interface on the failure behavior and size effect of concrete

    NASA Astrophysics Data System (ADS)

    Kampmann, Raphael

    The failure behavior of concrete materials is not completely understood because conventional test methods fail to assess the material response independent of the sample size and shape. To study the influence of strength and strain affecting test conditions, four typical concrete sample types were experimentally evaluated in uniaxial compression and analyzed for strength, deformational behavior, crack initiation/propagation, and fracture patterns under varying boundary conditions. Both low friction and conventional compression interfaces were assessed. High-speed video technology was used to monitor macrocracking. Inferential data analysis proved reliably lower strength results for reduced surface friction at the compression interfaces, regardless of sample shape. Reciprocal comparisons revealed statistically significant strength differences between most sample shapes. Crack initiation and propagation was found to differ for dissimilar compression interfaces. The principal stress and strain distributions were analyzed, and the strain domain was found to resemble the experimental results, whereas the stress analysis failed to explain failure for reduced end confinement. Neither stresses nor strains indicated strength reductions due to reduced friction, and therefore, buckling effects were considered. The high-speed video analysis revealed localized buckling phenomena, regardless of end confinement. Slender elements were the result of low friction, and stocky fragments developed under conventional confinement. The critical buckling load increased accordingly. The research showed that current test methods do not reflect the "true" compressive strength and that concrete failure is strain driven. Ultimate collapse results from buckling preceded by unstable cracking.

  6. Real-Time Cameraless Measurement System Based on Bioelectrical Ventilatory Signals to Evaluate Fear and Anxiety.

    PubMed

    Soh, Zu; Matsuno, Motoki; Yoshida, Masayuki; Tsuji, Toshio

    2018-04-01

    Fear and anxiety in fish are generally evaluated by video-based behavioral analysis. However, it is difficult to distinguish the psychological state of fish exclusively through video analysis, particularly whether the fish are freezing, which represents typical fear behavior, or merely resting. We propose a system that can measure bioelectrical signals called ventilatory signals and simultaneously analyze swimming behavior in real time. Experimental results comparing the behavioral analysis of the proposed system and the camera system showed a low error level with an average absolute position error of 9.75 ± 3.12 mm (about one-third of the body length) and a correlation between swimming speeds of r = 0.93 ± 0.07 (p < 0.01). We also exposed the fish to zebrafish skin extracts containing alarm substances that induce fear and anxiety responses to evaluate their emotional changes. The results confirmed that this solution significantly changed all behavioral and ventilatory signal indices obtained by the proposed system (p < 0.01). By combining the behavioral and ventilatory signal indices, we could detect fear and anxiety with a discrimination rate of 83.3% ± 16.7%. Furthermore, we found that the decreasing fear and anxiety over time could be detected according to the peak frequency of the ventilatory signals, which cannot be measured through video analysis.

  7. Extension and Application of High-Speed Digital Imaging Analysis Via Spatiotemporal Correlation and Eigenmode Analysis of Vocal Fold Vibration Before and After Polyp Excision.

    PubMed

    Wang, Jun-Sheng; Olszewski, Emily; Devine, Erin E; Hoffman, Matthew R; Zhang, Yu; Shao, Jun; Jiang, Jack J

    2016-08-01

    To evaluate the spatiotemporal correlation of vocal fold vibration using eigenmode analysis before and after polyp removal and explore the potential clinical relevance of spatiotemporal analysis of correlation length and entropy as quantitative voice parameters. We hypothesized that increased order in the vibrating signal after surgical intervention would decrease the eigenmode-based entropy and increase correlation length. Prospective case series. Forty subjects (23 males, 17 females) with unilateral (n = 24) or bilateral (n = 16) polyps underwent polyp removal. High-speed videoendoscopy was performed preoperatively and 2 weeks postoperatively. Spatiotemporal analysis was performed to determine entropy, a quantification of signal disorder, and correlation length, the size of spatially ordered structure of vocal fold vibration in comparison to full spatial consistency. The signal analyzed consists of the vibratory pattern in space and time derived from the high-speed video glottal area contour. Entropy decreased (Z = -3.871, P < .001) and correlation length increased (t = -8.913, P < .001) following polyp excision. The intraclass correlation coefficients (ICC) for correlation length and entropy were 0.84 and 0.93. Correlation length and entropy are sensitive to mass lesions. These parameters could potentially be used to augment subjective visualization after polyp excision when evaluating procedural efficacy. © The Author(s) 2016.
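    One common way to define an eigenmode-based entropy is from the normalised singular-value spectrum of the space-time signal matrix; the sketch below illustrates that idea under assumptions of our own (it is not necessarily the authors' exact definition):

```python
import numpy as np

def eigenmode_entropy(signal_matrix):
    """Entropy of the normalised singular-value spectrum of a
    (space x time) signal matrix.  Low entropy means the vibration
    energy is concentrated in a few spatial eigenmodes, i.e. an
    ordered vibration; high entropy means disordered vibration."""
    s = np.linalg.svd(np.asarray(signal_matrix, float), compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)   # energy fraction carried by each mode
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.linspace(0.0, 1.0, 200)
space = np.linspace(0.0, 1.0, 50)
# Ordered vibration: a single spatial mode oscillating in time.
ordered = np.outer(np.sin(np.pi * space), np.sin(2 * np.pi * 10 * t))
# Disordered vibration: the same signal buried in noise.
rng = np.random.default_rng(0)
disordered = ordered + 0.8 * rng.standard_normal(ordered.shape)
```

    Under this kind of definition, the reported post-surgical entropy decrease corresponds to vibration energy collapsing back onto fewer eigenmodes.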

  8. Two-, three-, and four-poster jets in cross flow

    NASA Technical Reports Server (NTRS)

    Vukits, Thomas J.; Sullivan, John P.; Murthy, S. N. B.

    1993-01-01

    In connection with the problems of the ingestion of hot exhaust gases in engines of V/STOL and STOVL aircraft in ground effect, a series of studies have been undertaken. Ground impinging, two- and three-poster jets operating in the presence of cross flow were studied. The current paper is divided into two parts. The first part is a comparison of the low speed, two-, three-, and four-poster jet cases, with respect to the flowfield in the region of interaction between the forward and the jet flows. These include cases with mass balanced inlet suction. An analysis of the inlet entry plane of the low speed two- and three-poster jet cases is also given. In the second part, high speed results for a two jet configuration without inlet suction are given. The results are based on quantitative, marker concentration distributions obtained by digitizing video images.

  9. Strategic Acoustic Control of a Hummingbird Courtship Dive.

    PubMed

    Clark, Christopher J; Mistick, Emily A

    2018-04-23

    Male hummingbirds court females with a high-speed dive in which they "sing" with their tail feathers. The male's choice of trajectory provides him strategic control over acoustic frequency and pressure levels heard by the female. Unlike related species, male Costa's hummingbirds (Calypte costae) choose to place their dives to the side of females. Here we show that this minimizes an audible Doppler curve in their dive sound, thereby depriving females of an acoustic indicator that would otherwise reveal male dive speed. Wind-tunnel experiments indicate that the sounds produced by their feathers are directional; thus, males should aim their tail toward females. High-speed video of dives reveals that males twist half of their tail vertically during the dive, which acoustic-camera video shows effectively aims this sound sideways, toward the female. Our results demonstrate that male animals can strategically modulate female perception of dynamic aspects of athletic motor displays, such as their speed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Viewers can keep up with fast subtitles: Evidence from eye movements

    PubMed Central

    2018-01-01

    People watch subtitled audiovisual materials more than ever before. With the proliferation of subtitled content, we are also witnessing an increase in subtitle speeds. However, there is an ongoing controversy about what optimum subtitle speeds should be. This study looks into whether viewers can keep up with increasingly fast subtitles and whether the way people cope with subtitled content depends on their familiarity with subtitling and on their knowledge of the language of the film soundtrack. We tested 74 English, Polish and Spanish viewers watching films subtitled at different speeds (12, 16 and 20 characters per second). The films were either in Hungarian, a language unknown to the participants (Experiment 1), or in English (Experiment 2). We measured viewers’ comprehension, self-reported cognitive load, scene and subtitle recognition, preferences and enjoyment. By analyzing people’s eye gaze, we were able to discover that most viewers could read the subtitles as well as follow the images, coping well even with fast subtitle speeds. Slow subtitles triggered more re-reading, particularly in English clips, causing more frustration and less enjoyment. Faster subtitles with unreduced text were preferred in the case of English videos, and slower subtitles with text edited down in Hungarian videos. The results provide empirical grounds for revisiting current subtitling practices to enable more efficient processing of subtitled videos for viewers. PMID:29920538

  11. Application of video-based technology for the simultaneous measurement of accommodation and vergence.

    PubMed

    Suryakumar, Rajaraman; Meyers, Jason P; Irving, Elizabeth L; Bobier, William R

    2007-01-01

    Accommodation and vergence are two ocular motor systems that interact during binocular vision. Independent measurement of the response dynamics of each system has been achieved by the application of optometers and eye trackers. However, relatively few devices, typically earlier-model optometers, allow the simultaneous assessment of accommodation and vergence. In this study we describe the development and application of a custom-designed high-speed digital photorefractor that allows rapid measures of accommodation (up to 75 Hz). In addition, the photorefractor was synchronized with a video-based stereo eye tracker to allow simultaneous measurement of accommodation and vergence. Analysis of accommodation and vergence could then be conducted offline. The new instrumentation is suitable for the investigation of young children and could potentially be used with clinical populations.

  12. Haptic Glove Technology: Skill Development through Video Game Play

    ERIC Educational Resources Information Center

    Bargerhuff, Mary Ellen; Cowan, Heidi; Oliveira, Francisco; Quek, Francis; Fang, Bing

    2010-01-01

    This article introduces a recently developed haptic glove system and describes how the participants used a video game that was purposely designed to train them in skills that are needed for the efficient use of the haptic glove. Assessed skills included speed, efficiency, embodied skill, and engagement. The findings and implications for future…

  13. Video signal processing system uses gated current mode switches to perform high speed multiplication and digital-to-analog conversion

    NASA Technical Reports Server (NTRS)

    Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.

    1966-01-01

    Video signal processor uses special-purpose integrated circuits with nonsaturating current-mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine it with analog fading information for display on a color CRT.

  14. ATM: Restructuring Learning for Deaf Students.

    ERIC Educational Resources Information Center

    Keefe, Barbara; Stockford, David

    Governor Baxter School for the Deaf is one of six Maine pilot sites chosen by NYNEX to showcase asynchronous transfer mode (ATM) technology. ATM is a network connection that allows high bandwidth transmission of data, voice, and video. Its high speed capability allows for high quality two-way full-motion video, which is especially beneficial to a…

  15. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above the street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
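The counting core described here (segmenting moving regions, then identifying individual vehicles) can be sketched as background subtraction plus connected-component labeling. This simplified stand-in omits the digital filters and mathematical morphology of the real pipeline, and all data are synthetic:

```python
from collections import deque

def count_vehicles(frame, background, thresh=30, min_area=4):
    """Count moving blobs in one frame by background subtraction and
    4-connected component labeling (a simplified stand-in for the
    paper's filtering/morphology/segmentation pipeline)."""
    h, w = len(frame), len(frame[0])
    mask = [[abs(frame[y][x] - background[y][x]) > thresh for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill one connected component, measuring its area
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area >= min_area:   # ignore small noise blobs
                    count += 1
    return count

# Two bright 2x2 "vehicles" on a uniform dark road.
bg = [[10] * 8 for _ in range(6)]
fr = [row[:] for row in bg]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (3, 5), (3, 6), (4, 5), (4, 6)]:
    fr[y][x] = 200
# count_vehicles(fr, bg) -> 2
```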

  16. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than speed response. © The Author(s) 2016.

  17. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormous with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude, etc.) must be treated to display the seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle such enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years, GPU performance has improved rapidly, and GPU computing provides a high-performance computing environment at a lower cost than before. Moreover, the GPU has an advantage for visualization of the processed data, because it was originally designed as an architecture for graphics processing. In GPU computing, the processed data are always stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfers, which enables high-speed processing of the seismic data. The present study examines GPU computing-based high-speed visualization and the feasibility of a high-speed visualization system for hypocenter data.

  18. An Analysis of Bubble Deformation by a Sphere Relevant to the Measurements of Bubble-Particle Contact Interaction and Detachment Forces.

    PubMed

    Sherman, H; Nguyen, A V; Bruckard, W

    2016-11-22

    Atomic force microscopy makes it possible to measure the interacting forces between individual colloidal particles and air bubbles, which can provide a measure of the particle hydrophobicity. To indicate the level of hydrophobicity of the particle, the contact angle can be calculated, assuming that no interfacial deformation occurs with the bubble retaining a spherical profile. Our experimental results obtained using a modified sphere tensiometry apparatus to detach submillimeter spherical particles show that deformation of the bubble interface does occur during particle detachment. We also develop a theoretical model to describe the equilibrium shape of the bubble meniscus at any given particle position, based on the minimization of the free energy of the system. The developed model allows us to analyze high-speed video captured during detachment. In the system model deformation of the bubble profile is accounted for by the incorporation of a Lagrange multiplier into both the Young-Laplace equation and the force balance. The solution of the bubble profile matched to the high-speed video allows us to accurately calculate the contact angle and determine the total force balance as a function of the contact point of the bubble on the particle surface.

  19. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described, showing that error-free reconstruction of the original 10-bit CR images can be achieved.
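The paper's error-free compression scheme itself is more elaborate, but the payoff it enables, contrast control at video rates, comes from inserting a look-up table between the display buffer and the video system. A sketch of such a window/level LUT over reconstructed 10-bit values (the window and level parameters are illustrative, not from the paper):

```python
def window_level_lut(window, level, in_bits=10):
    """Build a look-up table mapping 10-bit pixel values to 8-bit display
    intensities for a given contrast window/level -- the kind of table a
    hardware stage between display buffer and video system can apply at
    video rates. Parameters are illustrative, not the authors' design."""
    lo = level - window / 2.0
    lut = []
    for v in range(1 << in_bits):
        t = (v - lo) / window                       # 0..1 inside the window
        lut.append(min(255, max(0, int(round(t * 255)))))
    return lut

lut = window_level_lut(window=512, level=512)
# Values below the window clip to 0, values above it clip to 255, and the
# window midpoint maps near mid-gray; re-issuing the LUT re-windows the
# whole image without touching the stored 10-bit data.
```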

  20. Transition of cavitating flow to supercavitation within Venturi nozzle - hysteresis investigation

    NASA Astrophysics Data System (ADS)

    Jiří, Kozák; Pavel, Rudolf; Rostislav, Huzlík; Martin, Hudec; Radomír, Chovanec; Ondřej, Urban; Blahoslav, Maršálek; Eliška, Maršálková; František, Pochylý; David, Štefan

    Cavitation is usually considered an undesirable phenomenon. On the other hand, it can be utilized in many applications. One technical application is the use of cavitation in water treatment, where hydrodynamic cavitation appears to be an effective way to reduce cyanobacteria within large bulks of water. The main scope of this paper is the investigation of cavitation within a Venturi nozzle during the transition from fully developed cavitation to the supercavitation regime and vice versa. The dynamics of cavitation was investigated using experimental data on pressure pulsations and analysis of high-speed videos, where FFT of the pixel intensity and Proper Orthogonal Decomposition (POD) of the records were used to identify the dominant frequencies connected with the presence of cavitation. The methodology of the semiautomated analysis of the high-speed (HS) records using the FFT is described. The obtained results were correlated and, beyond that, the possible presence of hysteresis is discussed.
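The FFT-of-pixel-intensity step can be sketched as follows: sample one pixel (or a small region average) across frames and locate the strongest spectral peak. A naive DFT with a synthetic shedding signal (the frame rate and frequency below are illustrative, not measured values):

```python
import math

def dominant_frequency(signal, fps):
    """Return the dominant nonzero frequency (Hz) of a pixel-intensity
    time series via a naive DFT -- a minimal stand-in for the FFT-based
    analysis of high-speed cavitation videos."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]       # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n                     # bin index -> Hz

# Synthetic cavity-shedding brightness: a 200 Hz oscillation filmed at 1 kHz.
fps, n = 1000, 100
series = [128 + 40 * math.sin(2 * math.pi * 200 * t / fps) for t in range(n)]
# dominant_frequency(series, fps) -> 200.0
```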

  1. Biomechanical Analysis of T2 Exercise

    NASA Technical Reports Server (NTRS)

    DeWitt, John K.; Ploutz-Snyder, Lori; Everett, Meghan; Newby, Nathaniel; Scott-Pandorf, Melissa; Guilliams, Mark E.

    2010-01-01

    Crewmembers regularly perform treadmill exercise on the ISS. With the implementation of T2 on ISS, there is now the capacity to obtain ground reaction force (GRF) data. GRF data combined with video motion data allow biomechanical analyses that generate joint torque estimates from exercise conditions. Knowledge of how speed and load influence joint torque will provide quantitative information on which exercise prescriptions can be based. The objective is to determine the joint kinematics, ground reaction forces, and joint kinetics associated with treadmill exercise on the ISS. This study will: 1) Determine if specific exercise speed and harness load combinations are superior to others in exercise benefit; and 2) Aid in the design of exercise prescriptions that will be most beneficial in maintaining crewmember health.

  2. OBSERVER RATING VERSUS THREE-DIMENSIONAL MOTION ANALYSIS OF LOWER EXTREMITY KINEMATICS DURING FUNCTIONAL SCREENING TESTS: A SYSTEMATIC REVIEW.

    PubMed

    Maclachlan, Liam; White, Steven G; Reid, Duncan

    2015-08-01

    Functional assessments are conducted in both clinical and athletic settings in an attempt to identify those individuals who exhibit movement patterns that may increase their risk of non-contact injury. In place of highly sophisticated three-dimensional motion analysis, functional testing can be completed through observation. To evaluate the validity of movement observation assessments by summarizing the results of articles comparing human observation in real-time or video play-back and three-dimensional motion analysis of lower extremity kinematics during functional screening tests. Systematic review. A computerized systematic search was conducted through Medline, SPORTDiscus, Scopus, CINAHL, and Cochrane health databases between February and April of 2014. Validity studies comparing human observation (real-time or video play-back) to three-dimensional motion analysis of functional tasks were selected. Only studies comprising uninjured, healthy subjects conducting lower extremity functional assessments were appropriate for review. Eligible observers were certified health practitioners or qualified members of sports and athletic training teams that conduct athlete screening. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) was used to appraise the literature. Results are presented in terms of functional tasks. Six studies met the inclusion criteria. Across these studies, two-legged squats, single-leg squats, drop-jumps, and running and cutting manoeuvres were the functional tasks analysed. When compared to three-dimensional motion analysis, observer ratings of lower extremity kinematics, such as knee position in relation to the foot, demonstrated mixed results. Single-leg squats achieved target sensitivity values (≥ 80%) but not specificity values (≥ 50%). Drop-jump task agreement ranged from poor (< 50%) to excellent (> 80%). Two-legged squats achieved 88% sensitivity and 85% specificity. 
Mean underestimations as large as 198 (peak knee flexion) were found in the results of those assessing running and side-step cutting manoeuvres. Variables such as the speed of movement, the methods of rating, the profiles of participants, and the experience levels of observers may have influenced the outcomes of functional testing. The small number of studies limits generalizability. Furthermore, this review used two-dimensional video play-back for the majority of observations. If the movements had been rated in real time, the results may have been different. Slower, speed-controlled movements using dichotomous ratings reach target sensitivity and demonstrate higher overall levels of agreement. As a result, their utilization in functional screening is advocated. Level of evidence: 1A.
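The sensitivity and specificity figures quoted above reduce to simple counts of dichotomous observer ratings against the 3-D motion-analysis reference standard. A sketch with made-up ratings (the data are illustrative only):

```python
def sensitivity_specificity(observer, gold):
    """Sensitivity and specificity of dichotomous observer ratings against
    3-D motion analysis as the reference standard (True = movement flagged
    as aberrant). The ratings below are made up for illustration."""
    tp = sum(o and g for o, g in zip(observer, gold))
    tn = sum((not o) and (not g) for o, g in zip(observer, gold))
    fp = sum(o and (not g) for o, g in zip(observer, gold))
    fn = sum((not o) and g for o, g in zip(observer, gold))
    return tp / (tp + fn), tn / (tn + fp)

gold     = [True, True, True, True, False, False, False, False]
observer = [True, True, True, False, False, False, False, True]
sens, spec = sensitivity_specificity(observer, gold)
# One missed aberrant movement and one false alarm out of eight ratings
# give sensitivity 0.75 and specificity 0.75.
```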

  3. Integrating Time-Synchronized Video with Other Geospatial and Temporal Data for Remote Science Operations

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace

    2018-01-01

    Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
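The dropout handling described for the Video Module amounts to comparing the intervals of successfully recorded video against the session timeline. A minimal sketch (the interval format and gap threshold are assumptions for illustration, not the actual xGDS schema):

```python
def find_dropouts(segments, start, end, min_gap=0.5):
    """Given (t0, t1) intervals of successfully recorded video within a
    session [start, end], return the dropout gaps to visualize on the
    playback timeline. Interval format and threshold are assumptions,
    not the actual xGDS data model."""
    gaps, cursor = [], start
    for t0, t1 in sorted(segments):
        if t0 - cursor >= min_gap:          # uncovered span before this segment
            gaps.append((cursor, t0))
        cursor = max(cursor, t1)
    if end - cursor >= min_gap:             # trailing dropout, if any
        gaps.append((cursor, end))
    return gaps

# A 60 s session where the network dropped out twice.
segs = [(0.0, 12.0), (14.5, 40.0), (47.0, 60.0)]
# find_dropouts(segs, 0.0, 60.0) -> [(12.0, 14.5), (40.0, 47.0)]
```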

  4. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help to improve the clinical utility of this technology.
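The odd/even field-separation trick relies on interlaced video exposing the two fields at different instants: splitting them yields twice as many (half-height) temporal samples per frame. A minimal sketch of the idea, not the authors' code:

```python
def split_fields(frame):
    """Split an interlaced video frame (a list of rows) into its even and
    odd fields. Each field was exposed at a different instant, so treating
    them separately doubles the temporal sampling rate of a CMTF scan.
    Illustrative sketch only."""
    even = frame[0::2]   # field 1: rows 0, 2, 4, ...
    odd = frame[1::2]    # field 2: rows 1, 3, 5, ...
    return even, odd

frame = [[r] * 4 for r in range(6)]   # 6 rows, each tagged with its row index
even, odd = split_fields(frame)
# One interlaced frame becomes two half-height samples taken at different
# instants -- the source of the doubled CMTF sampling rate.
```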

  5. Games for Training: Leveraging Commercial Off the Shelf Multiplayer Gaming Software for Infantry Squad Collective Training

    DTIC Science & Technology

    2005-09-01

    squad training, team training, dismounted training, video games, computer games, multiplayer games. … Multiplayer - mode of play for computer and video games in which multiple people can play the same game at the same time (Wikipedia, 2005) … that "improvements in 3-D image generation on the PC and the speed of the internet" have increased the military's interest in the use of video games as

  6. A MPEG-4 encoder based on TMS320C6416

    NASA Astrophysics Data System (ADS)

    Li, Gui-ju; Liu, Wei-ning

    2013-08-01

    Engineering and products need to achieve real-time video encoding on a DSP, but the high computational complexity and huge amount of data require a system with high data throughput. In this paper, a real-time MPEG-4 video encoder is designed on the TMS320C6416 platform. The kernel is the TMS320C6416T DSP, and an FPGA chip handles the organization and management of the video data in order to control the flow of input and output data. The encoded stream is output through the synchronous serial port. The system has a clock frequency of 1 GHz and a processing capacity of up to 8000 MIPS when running at full speed. Because an MPEG-4 video encoder ported directly to the DSP platform has low coding efficiency, the program structure, data structures, and algorithms must be improved in light of the characteristics of the TMS320C6416T. First, the image storage architecture is designed by balancing computational cost, storage space, and EDMA read time. Several buffers are opened in on-chip memory, each caching 16 lines of the video data to be encoded, the reconstructed image, and the reference image including the search range. Memory is further saved by using the variable-alignment mode of the DSP, modifying the definitions of structure variables, and replacing the larger look-up tables with directly calculated arrays. After this restructuring, the program code, all variables, the buffers, and the interpolated image including the search range fit in on-chip memory. Second, the time-consuming modules and frequently called functions are rewritten in the parallel assembly language of the TMS320C6416T to increase the running speed. In addition, motion estimation is improved with a cross-hexagon search algorithm, which raises the search speed markedly. Finally, the execution time, signal-to-noise ratio, and compression ratio for a real-time image acquisition sequence are given. The experimental results show that the encoder accomplishes real-time encoding of 768×576, 25 frames-per-second grayscale video at a bit rate of 1.5 Mbit/s.
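The hexagon-pattern family of block-matching searches that the encoder adopts can be illustrated with a minimal sketch. This is a simplified relative of the cross-hexagon algorithm (the real one adds a cross-shaped first stage and early termination), and the frame data are synthetic:

```python
def sad(cur, ref, bx, by, mx, my, bs):
    """Sum of absolute differences between the bs x bs block of `cur` at
    (bx, by) and the block of `ref` displaced by motion vector (mx, my)."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + my + y][bx + mx + x])
    return total

def hexagon_search(cur, ref, bx, by, bs=4, rng=7):
    """Simplified hexagon-pattern motion search (in the spirit of the
    cross-hexagon algorithm the encoder uses): descend with a large
    hexagon, then refine with a small cross."""
    hexagon = [(2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]
    cross = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    mx = my = 0
    best = sad(cur, ref, bx, by, 0, 0, bs)
    improved = True
    while improved:                          # large-hexagon descent
        improved = False
        for dx, dy in hexagon:
            nx, ny = mx + dx, my + dy
            if abs(nx) <= rng and abs(ny) <= rng:
                cost = sad(cur, ref, bx, by, nx, ny, bs)
                if cost < best:
                    best, mx, my, improved = cost, nx, ny, True
    for dx, dy in cross:                     # small-cross refinement
        nx, ny = mx + dx, my + dy
        cost = sad(cur, ref, bx, by, nx, ny, bs)
        if cost < best:
            best, mx, my = cost, nx, ny
    return mx, my

# Reference: a smooth quadratic pattern; current frame = reference shifted by (3, 2).
ref = [[x * x + y * y for x in range(32)] for y in range(32)]
cur = [[(x + 3) ** 2 + (y + 2) ** 2 for x in range(32)] for y in range(32)]
# hexagon_search(cur, ref, 12, 12) -> (3, 2), checking far fewer candidate
# vectors than an exhaustive +/-7 full search.
```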

  7. Computer assisted video analysis of swimming performance in a forced swim test: simultaneous assessment of duration of immobility and swimming style in mice selected for high and low swim-stress induced analgesia.

    PubMed

    Juszczak, Grzegorz R; Lisowski, Paweł; Sliwa, Adam T; Swiergiel, Artur H

    2008-10-20

    In behavioral pharmacology, two problems are encountered when quantifying animal behavior: 1) reproducibility of the results across laboratories, especially in the case of manual scoring of animal behavior; and 2) the presence of different behavioral idiosyncrasies, common in genetically different animals, that mask or mimic the effects of the experimental treatments. This study aimed to develop an automated method enabling simultaneous assessment of the duration of immobility in mice and the depth of body submersion during swimming by means of a computer assisted video analysis system (EthoVision from Noldus). We tested and compared parameters of immobility based either on the speed of the object's (animal's) movement or on the percentage change in the object's area between consecutive video frames. We also examined the effects of an erosion-dilation filtering procedure on the results obtained with both parameters of immobility. Finally, we proposed an automated method enabling assessment of the depth of body submersion that reflects swimming performance. It was found that both parameters of immobility were sensitive to the effect of an antidepressant, desipramine, and that they yielded similar results when applied to mice that are good swimmers. The speed parameter was, however, more sensitive and more reliable because it depended less on random noise in the video image. It was also established that applying the erosion-dilation filtering procedure increased the reliability of both parameters of immobility. In the case of mice that were poor swimmers, the assessed duration of immobility differed depending on the chosen parameter, thus resulting in the presence or absence of differences between two lines of mice that differed in swimming performance. These results substantiate the need for assessing swimming performance when the duration of immobility in the FST is compared in lines that differ in their swimming "styles". 
Testing swimming performance can also be important in the studies investigating the effects of swim stress on other behavioral or physiological parameters because poor swimming abilities displayed by some lines can increase severity of swim stress, masking the between-line differences or the main treatment effects.
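The two immobility parameters the study compares (object speed versus frame-to-frame change in object area) can be sketched as follows. Note how a purely area-based parameter scores a smoothly translating animal as immobile, one reason a speed-based parameter can behave differently; the thresholds and silhouette masks below are illustrative, not EthoVision's:

```python
def centroid(mask):
    """Centroid of a binary silhouette given as a set of (x, y) pixels."""
    xs = [p[0] for p in mask]
    ys = [p[1] for p in mask]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def immobility(masks, speed_thresh=0.5, area_thresh=0.05):
    """Fraction of frame pairs scored immobile under two parameters:
    centroid speed (px/frame) and relative change in silhouette area.
    Thresholds are illustrative, not EthoVision's defaults."""
    speed_immobile = area_immobile = 0
    for prev, cur in zip(masks, masks[1:]):
        (px, py), (cx, cy) = centroid(prev), centroid(cur)
        speed = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        area_change = abs(len(cur) - len(prev)) / len(prev)
        speed_immobile += speed <= speed_thresh
        area_immobile += area_change <= area_thresh
    n = len(masks) - 1
    return speed_immobile / n, area_immobile / n

# Three frames: the mouse drifts 2 px, then floats still.
m0 = {(x, y) for x in range(10, 14) for y in range(5, 9)}
m1 = {(x + 2, y) for (x, y) in m0}   # translated: area unchanged, centroid moves
m2 = set(m1)                         # genuinely immobile frame pair
# immobility([m0, m1, m2]) -> (0.5, 1.0): the area parameter calls the
# drifting pair "immobile", the speed parameter does not.
```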

  8. Positive correlation between motion analysis data on the LapMentor virtual reality laparoscopic surgical simulator and the results from videotape assessment of real laparoscopic surgeries.

    PubMed

    Matsuda, Tadashi; McDougall, Elspeth M; Ono, Yoshinari; Hattori, Ryohei; Baba, Shiro; Iwamura, Masatsugu; Terachi, Toshiro; Naito, Seiji; Clayman, Ralph V

    2012-11-01

    We studied the construct validity of the LapMentor, a virtual reality laparoscopic surgical simulator, and the correlation between the data collected on the LapMentor and the results of video assessment of real laparoscopic surgeries. Ninety-two urologists were tested on basic skill tasks No. 3 (SK3) to No. 8 (SK8) on the LapMentor. They were divided into three groups: Group A (n=25) had no experience with laparoscopic surgeries as a chief surgeon; group B (n=33) had <35 experiences; and group C (n=34) had ≥35 experiences. Group scores on the accuracy, efficacy, and time of the tasks were compared. Forty physicians with ≥20 experiences supplied unedited videotapes showing a laparoscopic nephrectomy or an adrenalectomy in its entirety, and the videos were assessed in a blinded fashion by expert referees. Correlations between the videotape score (VS) and the performances on the LapMentor were analyzed. Group C showed significantly better outcomes than group A in the accuracy (SK5) (P=0.013), efficacy (SK8) (P=0.014), or speed (SKs 3 and 8) (P=0.009 and P=0.002, respectively) of the performances of LapMentor. Group B showed significantly better outcomes than group A in the speed and efficacy of the performances in SK8 (P=0.011 and P=0.029, respectively). Analyses of motion analysis data of LapMentor demonstrated that smooth and ideal movement of instruments is more important than speed of the movement of instruments to achieve accurate performances in each task. Multiple linear regression analysis indicated that the average score of the accuracy in SK4, 5, and 8 had significant positive correlation with VS (P=0.01). This study demonstrated the construct and predictive validity of the LapMentor basic skill tasks, supporting their possible usefulness for the preclinical evaluation of laparoscopic skills.

  9. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the predicted landing position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they act as background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
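For two calibrated, horizontally separated cameras, the three-dimensional position follows from the disparity between matched image coordinates. A rectified pinhole-model sketch (the focal length and baseline are made-up numbers, not the robot's calibration):

```python
def triangulate(xl, yl, xr, f=1000.0, baseline=0.5):
    """3-D position from matched pixel coordinates in two rectified,
    horizontally separated cameras (pinhole model; the focal length in
    pixels and the baseline in meters are illustrative values)."""
    disparity = xl - xr
    Z = f * baseline / disparity   # depth along the optical axis
    X = xl * Z / f                 # lateral offset
    Y = yl * Z / f                 # vertical offset
    return X, Y, Z

# A shuttlecock imaged 40 px apart in the two views lies 12.5 m away.
X, Y, Z = triangulate(xl=120.0, yl=-80.0, xr=80.0)
# Z == 1000 * 0.5 / 40 == 12.5 m; velocity then follows from positions in
# consecutive frames divided by the inter-frame time of the high-speed cameras.
```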

  10. High-speed schlieren videography of vortex-ring impact on a wall

    NASA Astrophysics Data System (ADS)

    Kissner, Benjamin; Hargather, Michael; Settles, Gary

    2011-11-01

    Ring vortices of approximately 20 cm diameter are generated through the use of an Airzooka toy. To make the vortex visible, it is seeded with difluoroethane gas, producing a refractive-index difference with the air. A 1-meter-diameter, single-mirror, double-pass schlieren system is used to visualize the ring-vortex motion, and also to provide the wall with which the vortex collides. High-speed imaging is provided by a Photron SA-1 digital video camera. The Airzooka is fired toward the mirror almost along the optical axis of the schlieren system, so that the view of the vortex-mirror collision is normal to the path of vortex motion. Vortex-wall interactions similar to those first observed by Walker et al. (JFM 181, 1987) are recorded at high speed. The presentation will consist of a screening and discussion of these video results.

  11. Video Analysis of Granular Gases in a Low-Gravity Environment

    NASA Astrophysics Data System (ADS)

    Lewallen, Erin

    2004-10-01

    Granular Agglomeration in Non-Gravitating Systems is a research project undertaken by the University of Tulsa Granular Dynamics Group. The project investigates the effects of weightlessness on granular systems by studying the dynamics of a "gas" of 1-mm diameter brass ball bearings driven at various amplitudes and frequencies in low-gravity. Models predict that particles in systems subjected to these conditions should exhibit clustering behavior due to energy loss through multiple inelastic collisions. Observation and study of clustering in our experiment could shed light on this phenomenon as a possible mechanism by which particles in space coalesce to form stable objects such as planetesimals and planetary ring systems. Our experiment has flown on NASA's KC-135 low gravity aircraft. Data analysis techniques for video data collected during these flights include modification of images using Adobe Photoshop and development of ball identification and tracking programs written in Interactive Data Language. By tracking individual balls, we aim to establish speed distributions for granular gases and thereby obtain values for granular temperature.
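Once individual balls are tracked, the granular temperature follows from the mean-square velocity fluctuation about the mean flow. A sketch under one common convention, per degree of freedom (the 2-D velocities are hypothetical, not flight data):

```python
def granular_temperature(velocities):
    """Granular temperature as the mean-square fluctuation of particle
    velocity about the mean flow, per degree of freedom (one common
    convention; units of velocity squared). The 2-D velocities are
    hypothetical tracking output."""
    n = len(velocities)
    mean_vx = sum(v[0] for v in velocities) / n
    mean_vy = sum(v[1] for v in velocities) / n
    fluct = sum((vx - mean_vx) ** 2 + (vy - mean_vy) ** 2
                for vx, vy in velocities)
    return fluct / (2 * n)   # 2 translational degrees of freedom

vels = [(1.0, 0.0), (-1.0, 0.0), (0.0, 2.0), (0.0, -2.0)]
# Zero mean flow, so T = (1 + 1 + 4 + 4) / (2 * 4) = 1.25; clustering should
# show up as this value dropping as inelastic collisions dissipate energy.
```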

  12. An innovative experimental sequence on electromagnetic induction and eddy currents based on video analysis and cheap data acquisition

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Bozzo, G.; Sapia, P.

    2017-11-01

    In this work, we present a coherent sequence of experiments on electromagnetic (EM) induction and eddy currents, appropriate for university undergraduate students, based on a magnet falling through a drilled aluminum disk. The sequence, leveraging on the didactical interplay between the EM and mechanical aspects of the experiments, allows us to exploit the students’ awareness of mechanics to elicit their comprehension of EM phenomena. The proposed experiments feature two kinds of measurements: (i) kinematic measurements (performed by means of high-speed video analysis) give information on the system’s kinematics and, via appropriate numerical data processing, allow us to get dynamic information, in particular on energy dissipation; (ii) induced electromagnetic field (EMF) measurements (by using a homemade multi-coil sensor connected to a cheap data acquisition system) allow us to quantitatively determine the inductive effects of the moving magnet on its neighborhood. The comparison between experimental results and the predictions from an appropriate theoretical model (of the dissipative coupling between the moving magnet and the conducting disk) offers many educational hints on relevant topics related to EM induction, such as Maxwell’s displacement current, magnetic field flux variation, and the conceptual link between induced EMF and induced currents. Moreover, the didactical activity gives students the opportunity to be trained in video analysis, data acquisition and numerical data processing.

  13. Display device-adapted video quality-of-experience assessment

    NASA Astrophysics Data System (ADS)

    Rehman, Abdul; Zeng, Kai; Wang, Zhou

    2015-03-01

    Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.

  14. Striking Distance Determined From High-Speed Videos and Measured Currents in Negative Cloud-to-Ground Lightning

    NASA Astrophysics Data System (ADS)

    Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena

    2017-12-01

    First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos exhibiting the development of positive and negative leaders and the pre-return stroke phase of currents measured along a short tower. In order to improve the results, a new criterion was used for the initiation and propagation of the sustained upward connecting leader, consisting of a 4 A continuous current threshold. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for a rough estimate of first strokes' SD using solely records of current. This approach combines the 4 A criterion and a representative composite three-dimensional propagation speed of 0.34 × 106 m/s for the leaders in the last 300 m propagated distance. SDs determined under this approach were shown to be consistent with those of the advanced procedure. This approach was applied to determine the SD of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and reveal great differences in relation to the SDs estimated for subsequent return strokes and strokes in triggered lightning.
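
    The rough current-only estimate described above combines the 4 A onset criterion with the composite three-dimensional leader speed. A hedged sketch under simplifying assumptions (illustrative names, and a naive first-crossing threshold search rather than the paper's sustained-current criterion):

```python
def striking_distance(current_t, current_i, t_return_stroke,
                      v3d=0.34e6, threshold=4.0):
    """Rough first-stroke striking-distance estimate from a current
    record: take the instant the pre-stroke current first reaches the
    4 A threshold (taken here as upward-leader onset) and multiply the
    time remaining until the return stroke by a composite 3-D leader
    propagation speed (default 0.34e6 m/s, as quoted in the abstract).

    current_t, current_i: time (s) and current (A) samples;
    t_return_stroke: return-stroke onset time (s).
    """
    for t, i in zip(current_t, current_i):
        if i >= threshold:
            return v3d * (t_return_stroke - t)
    raise ValueError("current never reaches the threshold")
```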

  15. Assessment of canine vocal fold function after injection of a new biomaterial designed to treat phonatory mucosal scarring.

    PubMed

    Karajanagi, Sandeep S; Lopez-Guerra, Gerardo; Park, Hyoungshin; Kobler, James B; Galindo, Marilyn; Aanestad, Jon; Mehta, Daryush D; Kumai, Yoshihiko; Giordano, Nicholas; d'Almeida, Anthony; Heaton, James T; Langer, Robert; Herrera, Victoria L M; Faquin, William; Hillman, Robert E; Zeitels, Steven M

    2011-03-01

    Most cases of irresolvable hoarseness are due to deficiencies in the pliability and volume of the superficial lamina propria of the phonatory mucosa. By using a US Food and Drug Administration-approved polymer, polyethylene glycol (PEG), we created a novel hydrogel (PEG30) and investigated its effects on multiple vocal fold structural and functional parameters. We injected PEG30 unilaterally into 16 normal canine vocal folds with survival times of 1 to 4 months. High-speed videos of vocal fold vibration, induced by intratracheal airflow, and phonation threshold pressures were recorded at 4 time points per subject. Three-dimensional reconstruction analysis of 11.7 T magnetic resonance images and histologic analysis identified 3 cases wherein PEG30 injections were the most superficial, so as to maximally impact vibratory function. These cases were subjected to in-depth analyses. High-speed video analysis of the 3 selected cases showed minimal to no reduction in the maximum vibratory amplitudes of vocal folds injected with PEG30 compared to the non-injected, contralateral vocal fold. All PEG30-injected vocal folds displayed mucosal wave activity with low average phonation threshold pressures. No significant inflammation was observed on microlaryngoscopic examination. Magnetic resonance imaging and histologic analyses revealed time-dependent resorption of the PEG30 hydrogel by phagocytosis with minimal tissue reaction or fibrosis. The PEG30 hydrogel is a promising biocompatible candidate biomaterial to restore form and function to deficient phonatory mucosa, while not mechanically impeding residual endogenous superficial lamina propria.

  16. Particle shape effect on erosion of optical glass substrates due to microparticles

    NASA Astrophysics Data System (ADS)

    Waxman, Rachel; Gray, Perry; Guven, Ibrahim

    2018-03-01

    Impact experiments using sand particles and soda lime glass spheres were performed on four distinct glass substrates. Sand particles were characterized using optical and scanning electron microscopy. High-speed video footage from impact tests was used to calculate incoming and rebound velocities of the individual impact events, as well as the particle volume and two-dimensional sphericity. Furthermore, video analysis was used in conjunction with optical and scanning electron microscopy to relate the incoming velocity and particle shape to subsequent fractures, including both radial and lateral cracks. Indentation theory [Marshall et al., J. Am. Ceram. Soc. 65, 561-566 (1982)] was applied and correlated with lateral crack lengths. Multi-variable power law regression was performed, incorporating the particle shape into the model, and was shown to fit the damage data better than the previous indentation model.
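
    A multi-variable power-law regression of the kind described is conventionally done by linear least squares in log space. The sketch below assumes a model of the form c = A·v^a·s^b with illustrative variable names (velocity, sphericity, crack length); the paper's exact model terms may differ.

```python
import numpy as np

def fit_power_law(v, s, c):
    """Fit c = A * v**a * s**b by linear least squares in log space.
    v: impact velocity, s: sphericity, c: lateral crack length
    (all positive 1-D arrays). Returns (A, a, b)."""
    X = np.column_stack([np.ones_like(v), np.log(v), np.log(s)])
    coef, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]
```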

  17. GLOBECOM '88 - IEEE Global Telecommunications Conference and Exhibition, Hollywood, FL, Nov. 28-Dec. 1, 1988, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Various papers on communications for the information age are presented. Among the general topics considered are: telematic services and terminals, satellite communications, telecommunications management network, control of integrated broadband networks, advances in digital radio systems, the intelligent network, broadband networks and services deployment, future switch architectures, performance analysis of computer networks, advances in spread spectrum, optical high-speed LANs, and broadband switching and networks. Also addressed are: multiple access protocols, video coding techniques, modulation and coding, photonic switching, SONET terminals and applications, standards for video coding, digital switching, progress in MANs, mobile and portable radio, software design for improved maintainability, multipath propagation and advanced countermeasures, data communication, network control and management, fiber in the loop, network algorithms and protocols, and advances in computer communications.

  18. Novel method based on video tracking system for simultaneous measurement of kinematics and flow in the wake of a freely swimming fish

    NASA Astrophysics Data System (ADS)

    Wu, Guanhao; Yang, Yan; Zeng, Lijiang

    2006-11-01

    A novel method based on video tracking system for simultaneous measurement of kinematics and flow in the wake of a freely swimming fish is described. Spontaneous and continuous swimming behaviors of a variegated carp (Cyprinus carpio) are recorded by two cameras mounted on a translation stage which is controlled to track the fish. By processing the images recorded during tracking, the detailed kinematics based on calculated midlines and quantitative analysis of the flow in the wake during a low-speed turn and burst-and-coast swimming are revealed. We also draw the trajectory of the fish during a continuous swimming bout containing several moderate maneuvers. The results prove that our method is effective for studying maneuvers of fish both from kinematic and hydrodynamic viewpoints.

  19. Sequence of the Essex-Lopresti lesion—a high-speed video documentation and kinematic analysis

    PubMed Central

    2014-01-01

    Background and purpose The pathomechanics of the Essex-Lopresti lesion are not fully understood. We used human cadavers and documented the genesis of the injury with high-speed cameras. Methods 4 formalin-fixed cadaveric specimens of human upper extremities were tested in a prototype, custom-made, drop-weight test bench. An axial high-energy impulse was applied and the development of the lesion was documented with 3 high-speed cameras. Results The high-speed images showed a transversal movement of the radius and ulna, which moved away from each other in the transversal plane during the impact. This resulted in a transversal rupture of the interosseous membrane, starting in its central portion, and only then did the radius migrate proximally and fracture. The lesion proceeded to the dislocation of the distal radio-ulnar joint and then to a full-blown Essex-Lopresti lesion. Interpretation Our findings indicate that fracture of the radial head may be preceded by at least partial lesions of the interosseous membrane in the course of high-energy axial trauma. PMID:24479620

  20. Movement of fine particles on an air bubble surface studied using high-speed video microscopy.

    PubMed

    Nguyen, Anh V; Evans, Geoffrey M

    2004-05-01

    A CCD high-speed video microscopy system operating at 1000 frames per second was used to obtain direct quantitative measurements of the trajectories of fine glass spheres on the surface of air bubbles. The glass spheres were rendered hydrophobic by a methylation process. Rupture of the intervening water film between a hydrophobic particle and an air bubble with the consequent formation of a three-phase contact was observed. The bubble-particle sliding attachment interaction is not satisfactorily described by the available theories. Surface forces had little effect on the particle sliding with a water film, which probably ruptured due to submicrometer-sized gas bubbles existing at the hydrophobic particle-water interface.

  1. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, which is a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  2. Diagnosing subtle palatal anomalies: Validation of video-analysis and assessment protocol for diagnosing occult submucous cleft palate.

    PubMed

    Rourke, Ryan; Weinberg, Seth M; Marazita, Mary L; Jabbour, Noel

    2017-09-01

    Submucous cleft palate (SMCP) classically involves bifid uvula, zona pellucida, and notched hard palate. However, patients may present with more subtle anatomic abnormalities. The ability to detect these abnormalities is important for surgeons managing velopharyngeal dysfunction (VPD) or considering adenoidectomy. This study aimed to validate an assessment protocol for the diagnosis of occult submucous cleft palate (OSMCP) and to identify physical examination features present in patients with OSMCP in the relaxed and activated palate positions. Study participants included patients referred to a pediatric VPD clinic with concern for hypernasality or SMCP. Using an appropriately encrypted iPod touch, transoral video was obtained for each patient with the palate in the relaxed and activated positions. The videos were reviewed by two otolaryngologists in normal speed and slow-motion, as needed, and a questionnaire was completed by each reviewer pertaining to the anatomy and function of the palate. 47 patients, with an average age of 4.6 years, were included in the study over a one-year period. Four videos were unusable due to incomplete view of the palate. The most common palatal abnormality noted was OSMCP, diagnosed by each reviewer in 26/43 and 30/43 patients respectively. Using the assessment protocol, agreement on palatal diagnosis was 83.7% (kappa = 0.68), indicating substantial agreement, with the most prevalent anatomic features being vaulted palate elevation (96%) and visible notching of hard palate (75%). The diagnosis of subtle palatal anomalies is difficult and can be subjective. Using the proposed video-analysis method and assessment protocol may improve reliability of diagnosis of OSMCP. Copyright © 2017 Elsevier B.V. All rights reserved.
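
    The inter-rater agreement statistics reported (83.7% raw agreement, kappa = 0.68) follow the standard Cohen's kappa computation, which discounts agreement expected by chance. A minimal sketch:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical judgments
    (e.g. two reviewers' palatal diagnoses per patient).

    p_o: observed agreement; p_e: agreement expected by chance
    from each rater's marginal category frequencies."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```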

  3. Is Relativistic Mass Real?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lincoln, Don

    One of the oddest features of special relativity is the inability to go faster than the speed of light, and this limit is absolutely real. The most common explanation is that the mass of an object increases with speed, but this particular explanation simply isn’t true. In this video, Fermilab’s Dr. Don Lincoln explains the truth behind this.

  4. The conical pendulum: the tethered aeroplane

    NASA Astrophysics Data System (ADS)

    Mazza, Anthony P.; Metcalf, William E.; Cinson, Anthony D.; Lynch, John J.

    2007-01-01

    The introductory physics lab curriculum usually has one experiment on uniform circular motion (UCM). Physics departments typically have several variable-speed rotators in storage that, if they work, no longer work well. Replacing these rotators with new ones is costly, especially when they are only used once a year. This article describes how an inexpensive (≈£10) tethered aeroplane, powered by a small electric motor, can be used to study UCM. The aeroplane is easy to see and entertaining to watch. For a given string length and air speed, a tethered aeroplane quickly finds a stable, horizontal, circular orbit. Using a digital video (DV) camcorder, VideoPoint Capture, QuickTime player, metre sticks and a stopwatch, data on the aeroplane's motion were obtained. The length of the string was varied from 120 to 340 cm while the air speed ranged from 200 to 480 cm s-1. For each string length and air speed, the period of the orbit and the diameter of the path were carefully measured. Theoretical values of path radii were then calculated using Newton's second law. The agreement between experiment and theory was usually better than 2%.
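
    The theoretical path radii from Newton's second law can be reproduced directly: for a conical pendulum of string length L and orbital period T, resolving the tension's vertical and centripetal components gives cos θ = gT²/(4π²L) and r = L sin θ. A sketch of that calculation (not the authors' own analysis):

```python
import math

def conical_radius(length, period, g=9.81):
    """Theoretical orbit radius of a conical pendulum with string
    length `length` (m) and measured period `period` (s).

    From Newton's second law: the tension's vertical component balances
    gravity and its horizontal component supplies the centripetal force,
    giving cos(theta) = g*T**2 / (4*pi**2*L) and r = L*sin(theta)."""
    cos_t = g * period**2 / (4 * math.pi**2 * length)
    if cos_t >= 1:
        raise ValueError("period too long for a conical orbit at this length")
    return length * math.sqrt(1 - cos_t**2)
```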

  5. Real-time fluorescence target/background (T/B) ratio calculation in multimodal endoscopy for detecting GI tract cancer

    NASA Astrophysics Data System (ADS)

    Jiang, Yang; Gong, Yuanzheng; Wang, Thomas D.; Seibel, Eric J.

    2017-02-01

    Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology to visualize early-stage cancer. T/B ratio is the quantitative analysis used to correlate fluorescence regions to cancer. Currently, T/B ratio calculation is post-processing and does not provide real-time feedback to the endoscopist. To achieve real-time computer assisted diagnosis (CAD), we establish image processing protocols for calculating T/B ratio and locating high-risk fluorescence regions for guiding biopsy and therapy in Barrett's esophagus (BE) patients. Methods: Chan-Vese algorithm, an active contour model, is used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method was applied to minimize the energy function of this algorithm and evolve the segmentation. The surrounding background was then identified using morphology operation. The average T/B ratio was computed and regions of interest were highlighted based on user-selected thresholding. Evaluation was conducted on 50 fluorescence videos acquired from clinical video recordings using a custom multimodal endoscope. Results: With a processing speed of 2 fps on a laptop computer, we obtained accurate segmentation of high-risk regions examined by experts. For each case, the clinical user could optimize target boundary by changing the penalty on area inside the contour. Conclusion: Automatic and real-time procedure of calculating T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase processing speed to at least 5 fps, refine the clinical interface, and apply to additional GI cancers and fluorescence peptides.
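
    The T/B computation described (mean intensity in the segmented target over the mean in a morphologically derived surrounding background) can be sketched as below. The 4-connected dilation and ring width are illustrative assumptions standing in for the paper's morphology operation, not the clinical pipeline.

```python
import numpy as np

def _dilate(mask, iterations):
    """Simple 4-connected binary dilation (a stand-in for the
    morphology operation used to build the background region)."""
    m = mask.copy()
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def tb_ratio(image, target_mask, ring_width=5):
    """Target/background ratio: mean fluorescence inside the segmented
    region over the mean in a surrounding ring of `ring_width` pixels."""
    ring = _dilate(target_mask, ring_width) & ~target_mask
    return image[target_mask].mean() / image[ring].mean()
```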

  6. Characterization of Axial Inducer Cavitation Instabilities via High Speed Video Recordings

    NASA Technical Reports Server (NTRS)

    Arellano, Patrick; Peneda, Marinelle; Ferguson, Thomas; Zoladz, Thomas

    2011-01-01

    Sub-scale water tests were undertaken to assess the viability of utilizing high resolution, high frame-rate digital video recordings of a liquid rocket engine turbopump axial inducer to characterize cavitation instabilities. These high speed video (HSV) images of various cavitation phenomena, including higher order cavitation, rotating cavitation, alternating blade cavitation, and asymmetric cavitation, as well as non-cavitating flows for comparison, were recorded from various orientations through an acrylic tunnel using one and two cameras at digital recording rates ranging from 6,000 to 15,700 frames per second. The physical characteristics of these cavitation forms, including the mechanisms that define the cavitation frequency, were identified. Additionally, these images showed how the cavitation forms changed and transitioned from one type (tip vortex) to another (sheet cavitation) as the inducer boundary conditions (inlet pressures) were changed. Image processing techniques were developed which tracked the formation and collapse of cavitating fluid in a specified target area, both in the temporal and frequency domains, in order to characterize the cavitation instability frequency. The accuracy of the analysis techniques was found to be very dependent on target size for higher order cavitation, but much less so for the other phenomena. Tunnel-mounted piezoelectric, dynamic pressure transducers were present throughout these tests and were used as references in correlating the results obtained by image processing. Results showed good agreement between image processing and dynamic pressure spectral data. The test set-up, test program, and test results including H-Q and suction performance, dynamic environment and cavitation characterization, and image processing techniques and results will be discussed.
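
    The image-processing step that tracks cavitating fluid in a specified target area, in both the temporal and frequency domains, reduces in its simplest form to spectral analysis of the region's mean pixel intensity. An illustrative sketch, not the study's actual code:

```python
import numpy as np

def cavitation_frequency(frames, roi, fps):
    """Dominant fluctuation frequency of mean pixel intensity in a
    target region of interest across a stack of video frames.

    frames: (T, H, W) array of grayscale frames;
    roi: (row_slice, col_slice) defining the target area;
    fps: camera frame rate in frames per second."""
    signal = frames[(slice(None),) + roi].mean(axis=(1, 2))
    signal = signal - signal.mean()            # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

    Note the frequency resolution is fps divided by the number of frames analyzed, which is one reason target size and record length matter for resolving higher-order cavitation.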

  7. Darwin's bee-trap: The kinetics of Catasetum, a new world orchid.

    PubMed

    Nicholson, Charles C; Bales, James W; Palmer-Fortune, Joyce E; Nicholson, Robert G

    2008-01-01

    The orchid genus Catasetum employs a hair-trigger-activated pollen release mechanism, which forcibly attaches pollen sacs onto foraging insects in the New World tropics. This remarkable adaptation was studied extensively by Charles Darwin, who termed this rapid response "sensitiveness." Using high speed video cameras at a frame rate of 1000 fps, this rapid release was filmed, and from the subsequent footage, velocity, speed, acceleration, force and kinetic energy were computed.
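
    Deriving velocity and acceleration from 1000 fps footage is a finite-difference exercise on the tracked positions (kinetic energy then follows as ½mv²). A sketch assuming 2-D, pixel-calibrated coordinates; names are illustrative, not the authors' analysis code:

```python
import numpy as np

def kinematics(positions, fps=1000):
    """Speed and acceleration magnitudes from tracked 2-D positions
    in consecutive frames, using central differences (np.gradient)
    at the sample points.

    positions: (T, 2) array of calibrated (x, y) coordinates in m;
    fps: camera frame rate. Returns (speed, accel) arrays in m/s
    and m/s^2."""
    p = np.asarray(positions, dtype=float)
    dt = 1.0 / fps
    v = np.gradient(p, dt, axis=0)    # velocity components
    a = np.gradient(v, dt, axis=0)    # acceleration components
    return np.linalg.norm(v, axis=1), np.linalg.norm(a, axis=1)
```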

  8. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
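
    A polar mesh for texture-mapped distortion correction pairs each vertex in the distorted image with its corrected position; the graphics hardware then interpolates between vertices. The sketch below uses a single-coefficient radial model as a stand-in; the paper's calibration and mesh details differ.

```python
import numpy as np

def polar_mesh(center, r_max, n_r, n_theta, k1):
    """Vertex pairs for a polar tessellation mesh.

    Vertices lie on n_theta radial spokes and n_r rings about `center`.
    Each distorted-image vertex (xd, yd) is paired with its corrected
    position (xc, yc) under the illustrative single-coefficient radial
    model r_corrected = r * (1 + k1 * r**2).
    Returns two (n_theta, n_r, 2) arrays: distorted and corrected."""
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta)
    Rc = R * (1 + k1 * R**2)          # corrected radius per vertex
    distorted = np.stack([center[0] + R * np.cos(T),
                          center[1] + R * np.sin(T)], axis=-1)
    corrected = np.stack([center[0] + Rc * np.cos(T),
                          center[1] + Rc * np.sin(T)], axis=-1)
    return distorted, corrected
```

    A polar layout like this concentrates mesh resolution where the radial model varies fastest, which is the intuition behind the paper's finding that polar meshes beat grid-based ones for equal vertex counts.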

  9. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  10. Literature review on risky driving videos on YouTube: Unknown effects and areas for concern?

    PubMed

    Vingilis, Evelyn; Yıldırım-Yenier, Zümrüt; Vingilis-Jaremko, Larissa; Wickens, Christine; Seeley, Jane; Fleiter, Judy; Grushka, Daniel H

    2017-08-18

    Entry of terms reflective of extreme risky driving behaviors into the YouTube website yields millions of videos. The majority of the top 20 highly subscribed automotive YouTube websites are focused on high-performance vehicles, high speed, and often risky driving. Moreover, young men are the heaviest users of online video sharing sites, overall streaming more videos, and watching them longer than any other group. The purpose of this article is to review the literature on YouTube videos and risky driving. A systematic search was performed using the following specialized database sources-Scopus, PubMed, Web of Science, ERIC, and Google Scholar-for the years 2005-2015 for articles in the English language. Search words included "YouTube AND driving," "YouTube AND speeding," "YouTube AND racing." No published research was found on the content of risky driving videos or on the effects of these videos on viewers. This literature review presents the current state of our published knowledge on the topic, which includes a review of the effects of mass media on risky driving cognitions; attitudes and behavior; similarities and differences between mass and social media; information on the YouTube platform; psychological theories that could support YouTube's potential effects on driving behavior; and 2 examples of risky driving behaviors ("sidewalk skiing" and "ghost riding the whip") suggestive of varying levels of modeling behavior in subsequent YouTube videos. Every month about 1 billion individuals are reported to view YouTube videos (ebizMBA Guide 2015) and young men are the heaviest users, overall streaming more YouTube videos and watching them longer than women and other age groups (Nielsen 2011). This group is also the most dangerous group in traffic, engaging in more per capita violations and experiencing more per capita injuries and fatalities (e.g., Parker et al. 1995; Reason et al. 1990; Transport Canada 2015; World Health Organization 2015). YouTube also contains many channels depicting risky driving videos. The time has come for the traffic safety community to begin exploring these relationships.

  11. Progressive Return to Activity Following Acute Concussion/Mild Traumatic Brain Injury: Guidance for the Rehabilitation Provider in Deployed and Non-deployed Settings

    DTIC Science & Technology

    2014-01-01

    activity requires minimum 2 hours of rest • Video games, driving simulation ― 20 minutes to maximum of 40 minutes, followed by 80 minutes cognitive...as breath-holding, exertion, playing video games and driving. He is encouraged to monitor his HR before and during activity (not to exceed 40 percent...walking at low speed b) Shopping in the exchange for a single item c) Video games d) Television with rest breaks each hour Knowledge Test

  12. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    PubMed

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.
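
    The abstract does not give the paper's exact deformation-index formula, which is defined directly from the recorded images. One common shape-based formulation for an axisymmetrically deformed cell, shown purely for illustration, is (L − W)/(L + W):

```python
def deformation_index(length, width):
    """One common erythrocyte deformation index: (L - W) / (L + W),
    where L is the cell's extent along the flow and W its width,
    both measured from a video frame. Zero for an undeformed
    (circular) profile, approaching 1 for a fully elongated cell.
    Illustrative only; the paper defines its own index."""
    return (length - width) / (length + width)
```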

  13. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.

  14. An Experimental Study of Launch Vehicle Propellant Tank Fragmentation

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Jackson, Austin; Hays, Michael; Bangham, Mike; Blackwood, James; Skinner, Troy; Richman, Ben

    2014-01-01

    In order to better understand launch vehicle abort environments, Bangham Engineering Inc. (BEi) built a test assembly that fails sample materials (steel and aluminum plates of various alloys and thicknesses) under quasi-realistic vehicle failure conditions. Samples are exposed to pressures similar to those expected in vehicle failure scenarios and filmed at high speed to increase understanding of complex fracture mechanics. After failure, the fragments of each test sample are collected, catalogued and reconstructed for further study. Post-test analysis shows that aluminum samples consistently produce fewer fragments than steel samples of similar thickness and at similar failure pressures. Video analysis shows that there are several failure 'patterns' that can be observed for all test samples based on configuration. Fragment velocities are also measured from high speed video data. Sample thickness and material are analyzed for trends in failure pressure. Testing is also done with cryogenic and noncryogenic liquid loading on the samples. It is determined that liquid loading and cryogenic temperatures can decrease material fragmentation for sub-flight thicknesses. A method is developed for capture and collection of fragments that is greater than 97 percent effective in recovering sample mass, addressing the generation of tiny fragments. Currently, samples tested do not match actual launch vehicle propellant tank material thicknesses because of size constraints on test assembly, but test findings are used to inform the design and build of another, larger test assembly with the purpose of testing actual vehicle flight materials that include structural components such as iso-grid and friction stir welds.

  15. Bipolar cloud-to-ground lightning flash observations

    NASA Astrophysics Data System (ADS)

    Saba, Marcelo M. F.; Schumann, Carina; Warner, Tom A.; Helsdon, John H.; Schulz, Wolfgang; Orville, Richard E.

    2013-10-01

    Bipolar lightning is usually defined as a lightning flash where the current waveform exhibits a polarity reversal. There are very few reported cases of cloud-to-ground (CG) bipolar flashes using only one channel in the literature. Reports on this type of bipolar flash are not common due to the fact that, in order to confirm that currents of both polarities follow the same channel to the ground, one necessarily needs video records. This study presents five clear observations of single-channel bipolar CG flashes. High-speed video and electric field measurement observations are used and analyzed. Based on the video images obtained and based on previous observations of positive CG flashes with high-speed cameras, we suggest that positive leader branches which do not participate in the initial return stroke of a positive cloud-to-ground flash later generate recoil leaders whose negative ends, upon reaching the branch point, traverse the return stroke channel path to the ground, resulting in a subsequent return stroke of opposite polarity.

  16. Quantitative Measurement of Vocal Fold Vibration in Male Radio Performers and Healthy Controls Using High-Speed Videoendoscopy

    PubMed Central

    Warhurst, Samantha; McCabe, Patricia; Heard, Rob; Yiu, Edwin; Wang, Gaowu; Madill, Catherine

    2014-01-01

    Purpose Acoustic and perceptual studies show a number of differences between the voices of radio performers and controls. Despite this, the vocal fold kinematics underlying these differences are largely unknown. Using high-speed videoendoscopy, this study sought to determine whether the vocal vibration features of radio performers differed from those of non-performing controls. Method Using high-speed videoendoscopy, recordings of a mid-phonatory/i/ in 16 male radio performers (aged 25–52 years) and 16 age-matched controls (aged 25–52 years) were collected. Videos were extracted and analysed semi-automatically using High-Speed Video Program, obtaining measures of fundamental frequency (f0), open quotient and speed quotient. Post-hoc analyses of sound pressure level (SPL) were also performed (n = 19). Pearson's correlations were calculated between SPL and both speed and open quotients. Results Male radio performers had a significantly higher speed quotient than their matched controls (t = 3.308, p = 0.005). No significant differences were found for f0 or open quotient. No significant correlation was found between either open or speed quotient with SPL. Discussion A higher speed quotient in male radio performers suggests that their vocal fold vibration was characterised by a higher ratio of glottal opening to closing times than controls. This result may explain findings of better voice quality, higher equivalent sound level and greater spectral tilt seen in previous research. Open quotient was not significantly different between groups, indicating that the durations of complete vocal fold closure were not different between the radio performers and controls. Further validation of these results is required to determine the aetiology of the higher speed quotient result and its implications for voice training and clinical management in performers. PMID:24971625
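    The open quotient and speed quotient reported above are both derived from the glottal (opening) cycle over time. As a hedged illustration of the definitions only, not of the High-Speed Video Program's actual algorithm, the two quotients can be computed from a glottal area waveform for one vibratory cycle like this (the synthetic waveform and zero closed-level are assumptions):

```python
import numpy as np

def cycle_quotients(area, closed_level=0.0):
    """area: glottal area samples over one cycle (closed -> open -> closed).

    Returns (open quotient, speed quotient):
      OQ = open time / cycle time
      SQ = opening time / closing time
    """
    open_mask = area > closed_level
    open_idx = np.flatnonzero(open_mask)
    peak = open_idx[np.argmax(area[open_idx])]   # sample of maximum opening
    opening = peak - open_idx[0] + 1             # samples from onset to peak
    closing = open_idx[-1] - peak + 1            # samples from peak to offset
    oq = open_mask.sum() / len(area)
    sq = opening / closing
    return oq, sq

# Synthetic symmetric cycle: closed, ramp open, ramp closed, closed
area = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0])
oq, sq = cycle_quotients(area)   # open half the cycle, symmetric -> SQ = 1
```

    A higher speed quotient, as found in the radio performers, corresponds to a glottal cycle whose opening phase is long relative to its closing phase.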

  17. Integrated bronchoscopic video tracking and 3D CT registration for virtual bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Helferty, James P.; Padfield, Dirk R.

    2003-05-01

    Lung cancer assessment involves an initial evaluation of 3D CT image data followed by interventional bronchoscopy. The physician, with only a mental image inferred from the 3D CT data, must guide the bronchoscope through the bronchial tree to sites of interest. Unfortunately, this procedure depends heavily on the physician's ability to mentally reconstruct the 3D position of the bronchoscope within the airways. In order to assist physicians in performing biopsies of interest, we have developed a method that integrates live bronchoscopic video tracking and 3D CT registration. The proposed method is integrated into a system we have been devising for virtual-bronchoscopic analysis and guidance for lung-cancer assessment. Previously, the system relied on a method that only used registration of the live bronchoscopic video to corresponding virtual endoluminal views derived from the 3D CT data. This procedure only performs the registration at manually selected sites; it does not draw upon the motion information inherent in the bronchoscopic video. Further, the registration procedure is slow. The proposed method has the following advantages: (1) it tracks the 3D motion of the bronchoscope using the bronchoscopic video; (2) it uses the tracked 3D trajectory of the bronchoscope to assist in locating sites in the 3D CT "virtual world" to perform the registration. In addition, the method incorporates techniques to: (1) detect and exclude corrupted video frames (to help make the video tracking more robust); (2) accelerate the computation of the many 3D virtual endoluminal renderings (thus, speeding up the registration process). We have tested the integrated tracking-registration method on a human airway-tree phantom and on real human data.

  18. Development of a Video-Microscopic Tool To Evaluate the Precipitation Kinetics of Poorly Water Soluble Drugs: A Case Study with Tadalafil and HPMC.

    PubMed

    Christfort, Juliane Fjelrad; Plum, Jakob; Madsen, Cecilie Maria; Nielsen, Line Hagner; Sandau, Martin; Andersen, Klaus; Müllertz, Anette; Rades, Thomas

    2017-12-04

    Many drug candidates today have a low aqueous solubility and, hence, may show a low oral bioavailability, presenting a major formulation and drug delivery challenge. One way to increase the bioavailability of these drugs is to use a supersaturating drug delivery strategy. The aim of this study was to develop a video-microscopic method, to evaluate the effect of a precipitation inhibitor on supersaturated solutions of the poorly soluble drug tadalafil, using a novel video-microscopic small scale setup. Based on preliminary studies, a degree of supersaturation of 29 was chosen for the supersaturation studies with tadalafil in FaSSIF. Different amounts of hydroxypropyl methyl cellulose (HPMC) were predissolved in FaSSIF to give four different concentrations, and the supersaturated system was then created using a solvent shift method. Precipitation of tadalafil from the supersaturated solutions was monitored by video-microscopy as a function of time. Single-particle analysis was possible using commercially available software; however, to investigate the entire population of precipitating particles (i.e., their number and area covered in the field of view), an image analysis algorithm was developed (multiparticle analysis). The induction time for precipitation of tadalafil in FaSSIF was significantly prolonged by adding 0.01% (w/v) HPMC to FaSSIF, and the maximum inhibition was reached at 0.1% (w/v) HPMC, after which additional HPMC did not further increase the induction time. The single-particle and multiparticle analyses yielded the same ranking of the HPMC concentrations, regarding the inhibitory effect on precipitation. The developed small scale method to assess the effect of precipitation inhibitors can speed up the process of choosing the right precipitation inhibitor and the concentration to be used.
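    The multiparticle analysis described above amounts to thresholding each video frame and then characterising the whole population of precipitating particles: how many there are and what fraction of the field of view they cover. A simplified sketch of that step, assuming a grayscale frame and an illustrative threshold (not the authors' algorithm):

```python
import numpy as np

def analyze_frame(frame, threshold):
    """Count connected particles (4-connectivity) and their area fraction."""
    binary = frame > threshold
    labels = np.zeros(binary.shape, dtype=int)
    n = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                n += 1                       # new particle: flood-fill it
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]
                            and binary[y, x] and labels[y, x] == 0):
                        labels[y, x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    coverage = binary.mean()                 # area fraction of the view
    return n, coverage

# Tiny synthetic frame with two bright particles
frame = np.array([[0, 9, 0, 0],
                  [0, 9, 0, 8],
                  [0, 0, 0, 8]], dtype=float)
count, cov = analyze_frame(frame, threshold=5.0)   # two particles, 4/12 covered
```

    Tracking these two numbers over time gives the induction time (first frame in which particles appear) and the growth of the precipitating population.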

  19. An evidence based method to calculate pedestrian crossing speeds in vehicle collisions (PCSC).

    PubMed

    Bastien, C; Wellings, R; Burnett, B

    2018-06-07

    Pedestrian accident reconstruction is necessary to establish the cause of death, i.e. to establish the vehicle collision speed as well as the circumstances leading to the pedestrian being impacted, and to determine the culpability of those involved for a subsequent court enquiry. Understanding the complexity of the pedestrian's attitude during an accident investigation is necessary to ascertain the causes leading to the tragedy. A generic new method, named the Pedestrian Crossing Speed Calculator (PCSC), based on vector algebra, is proposed to compute the pedestrian crossing speed at the moment of impact. PCSC uses vehicle damage and pedestrian anthropometric dimensions to establish a combination of head projection angles against the windscreen; this angle is then compared against the combined-velocity angle created from the vehicle speed and the pedestrian crossing speed at the time of impact. The method has been verified using one accident fatality case in which the exact vehicle and pedestrian crossing speeds were known from Police forensic video analysis. PCSC was then applied to two other accident scenarios and correctly corroborated the witness statements regarding the pedestrians' crossing behaviours. The implications of PCSC could be significant once it is fully validated against further accident data, as the method is reversible, allowing the computation of vehicle impact velocity from pedestrian crossing speed as well as verification of witness accounts. Copyright © 2018 Elsevier Ltd. All rights reserved.
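    The core vector-algebra idea can be illustrated in a heavily simplified form: in the vehicle's frame of reference, the pedestrian moves along the resultant of the vehicle velocity and the crossing velocity, so the deflection angle of the resultant relative to the vehicle axis relates the two speeds. This sketch shows only that relation and its reversibility; the published PCSC additionally uses vehicle damage geometry and pedestrian anthropometry, and the angle here is an assumed input:

```python
import math

def crossing_speed(v_vehicle_ms, alpha_deg):
    """Pedestrian crossing speed from the combined-velocity angle alpha,
    under the simplification tan(alpha) = v_pedestrian / v_vehicle."""
    return v_vehicle_ms * math.tan(math.radians(alpha_deg))

def impact_speed(v_pedestrian_ms, alpha_deg):
    """The relation is reversible, as the abstract notes: recover the
    vehicle impact speed from a known pedestrian crossing speed."""
    return v_pedestrian_ms / math.tan(math.radians(alpha_deg))

v_ped = crossing_speed(13.9, 6.0)   # ~50 km/h vehicle, 6 degree deflection
v_car = impact_speed(v_ped, 6.0)    # round-trips back to 13.9 m/s
```

    The round-trip property is what makes the method usable in both directions: estimating crossing speed from a known impact speed, or impact speed from a witnessed crossing speed.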

  20. Ultrasound investigation of fetal human upper respiratory anatomy.

    PubMed

    Wolfson, V P; Laitman, J T

    1990-07-01

    Although the human upper respiratory-upper digestive tract is an area of vital importance, relatively little is known about either the structural or functional changes that occur in the region during the fetal period. While investigations in our laboratory have begun to chart these changes through the use of postmortem materials, in vivo studies have rarely been attempted. This study combines ultrasonography with new applications of video editing to examine aspects of prenatal upper respiratory development. Structures of the fetal upper respiratory-digestive tract and their movements were studied through the use of ultrasonography and detailed frame-by-frame analysis. Twenty-five living fetuses, aged 18-36 weeks' gestation, were studied in utero during routine diagnostic ultrasound examination. These real-time linear array sonograms were videotaped during each study. Videotapes were next analyzed for anatomical structures and movement patterns, played back through the ultrasound machine at normal speed, and then examined with a frame-by-frame video editor (FFVE) to identify structures and movements. Still images were photographed directly from the video monitor using a 35 mm camera. Results show that upper respiratory and digestive structures, as well as their movements, could be seen clearly during normal-speed and repeat frame-by-frame analysis. Major structures that could be identified in the majority of subjects included the trachea in 20 of 25 fetuses (80%); larynx, 76%; pharynx, 76%. Smaller structures were more variable, but were nevertheless observed on both sagittal and coronal section: piriform sinuses, 76%; thyroid cartilage, 36%; cricoid cartilage, 32%; and epiglottis, 16%. Movements of structures could also be seen and were those typically observed in connection with swallowing: fluttering tongue movements, changes in pharyngeal shape, and passage of a bolus via the piriform sinuses to the esophagus. Fetal swallows had minimal laryngeal motion.
This study represents the first time that the appearance of upper airway and digestive tract structures has been quantified in conjunction with their movements in the living fetus.

  1. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    PubMed

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on numerical analysis of the foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
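    One common numerical criterion for classifying foot strike pattern from a pressure footprint is the "strike index": the location of the centre of pressure at initial contact as a fraction of footprint length from the heel. The sketch below uses the conventional rearfoot/midfoot/forefoot thirds; the thresholds and numbers are illustrative assumptions, not the authors' exact algorithm:

```python
def strike_pattern(cop_at_contact_mm, foot_length_mm):
    """Classify foot strike from the centre-of-pressure position at initial
    contact, expressed as a fraction of footprint length from the heel."""
    s = cop_at_contact_mm / foot_length_mm
    if s < 1 / 3:
        return "rearfoot"    # contact starts in the rear third
    elif s < 2 / 3:
        return "midfoot"
    return "forefoot"

# Example: first contact 50 mm from the heel on a 260 mm footprint
pattern = strike_pattern(cop_at_contact_mm=50.0, foot_length_mm=260.0)
```

    The appeal of a pressure-plate criterion like this is that it is computed automatically for every step, whereas video-based classification requires manual frame inspection.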

  2. Optical observations of electrical activity in cloud discharges

    NASA Astrophysics Data System (ADS)

    Vayanganie, S. P. A.; Fernando, M.; Sonnadara, U.; Cooray, V.; Perera, C.

    2018-07-01

    The temporal variation of the luminosity of seven natural cloud-to-cloud lightning channels was studied, and the results are presented. The flashes were recorded using a high-speed video camera at 5000 fps (frames per second) with a pixel resolution of 512 × 512 at three locations in Sri Lanka, in the tropics. The luminosity variation of each channel with time was obtained by analyzing the image sequences. The recorded video frames, together with the luminosity variation, were studied to understand the cloud discharge process. Image analysis techniques were also used to characterize the channels. Cloud flashes show more luminosity variability than ground flashes. Most of the time, a cloud flash starts with a leader that does not exhibit a stepping process. The channel width and the standard deviation of the intensity variation across the channel were obtained for each cloud flash. The brightness variation across the channel follows a Gaussian distribution. The average duration of the cloud flashes that start with a non-stepped leader was 180.83 ms. The identified characteristics were matched against existing models to understand the cloud flash process. The finding that cloud discharges are not confined to a single process was further confirmed by this study. The observations show that a cloud flash is a basic lightning discharge that transfers charge between two charge centers without relying on one specific mechanism.
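    Since the cross-channel brightness is reported to be approximately Gaussian, a natural channel-width estimate is the standard deviation of the background-subtracted intensity profile taken perpendicular to the channel. A sketch of that estimator, with a synthetic profile assumed for illustration:

```python
import numpy as np

def channel_width(profile):
    """Width (Gaussian sigma, in pixels) of a cross-channel brightness
    profile, estimated from intensity-weighted moments."""
    x = np.arange(len(profile))
    w = profile - profile.min()          # remove the background level
    mean = (x * w).sum() / w.sum()       # intensity-weighted channel centre
    var = ((x - mean) ** 2 * w).sum() / w.sum()
    return np.sqrt(var)

# Synthetic profile: Gaussian channel (sigma = 3 px) on a flat background
x = np.arange(64.0)
profile = 10.0 + 100.0 * np.exp(-0.5 * ((x - 32.0) / 3.0) ** 2)
sigma = channel_width(profile)           # recovers ~3 px
```

    A moment-based estimate like this avoids nonlinear curve fitting, at the cost of sensitivity to residual background; a least-squares Gaussian fit would be the more robust alternative on noisy frames.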

  3. Combining High-Speed Cameras and Stop-Motion Animation Software to Support Students' Modeling of Human Body Movement

    ERIC Educational Resources Information Center

    Lee, Victor R.

    2015-01-01

    Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…

  4. Validation of 2 noninvasive, markerless reconstruction techniques in biplane high-speed fluoroscopy for 3-dimensional research of bovine distal limb kinematics.

    PubMed

    Weiss, M; Reich, E; Grund, S; Mülling, C K W; Geiger, S M

    2017-10-01

    Lameness severely impairs cattle's locomotion, and it is among the most important threats to animal welfare, performance, and productivity in the modern dairy industry. However, insight into the pathological alterations of claw biomechanics leading to lameness and an understanding of the biomechanics behind development of claw lesions causing lameness are limited. Biplane high-speed fluoroscopic kinematography is a new approach for the analysis of skeletal motion. Biplane high-speed videos in combination with bone scans can be used for 3-dimensional (3D) animations of bones moving in 3D space. The gold standard, marker-based animation, requires implantation of radio-opaque markers into bones, which impairs the practicability for lameness research in live animals. Therefore, the purpose of this study was to evaluate the comparative accuracy of 2 noninvasive, markerless animation techniques (semi-automatic and manual) in 3D animation of the bovine distal limb. Tantalum markers were implanted into each of the distal, middle, and proximal phalanges of 5 isolated bovine distal forelimbs, and biplane high-speed x-ray videos of each limb were recorded to capture the simulation of one step. The limbs were scanned by computed tomography to create bone models of the 6 digital bones, and 3D animations of the bones' movements were subsequently reconstructed using the marker-based, the semi-automatic, and the manual animation techniques. Manual animation translational bias and precision varied from 0.63 ± 0.26 mm to 0.80 ± 0.49 mm, and rotational bias and precision ranged from 2.41 ± 1.43° to 6.75 ± 4.67°. Semi-automatic translational values for bias and precision ranged from 1.26 ± 1.28 mm to 2.75 ± 2.17 mm, and rotational values varied from 3.81 ± 2.78° to 11.7 ± 8.11°. In our study, we demonstrated the successful application of biplane high-speed fluoroscopic kinematography to gait analysis of the bovine distal limb.
Using the manual animation technique, kinematics can be measured with sub-millimeter accuracy without the need for invasive marker implantation. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. World's first telepathology experiments employing WINDS ultra-high-speed internet satellite, nicknamed “KIZUNA”

    PubMed Central

    Sawai, Takashi; Uzuki, Miwa; Miura, Yasuhiro; Kamataki, Akihisa; Matsumura, Tsubasa; Saito, Kenji; Kurose, Akira; Osamura, Yoshiyuki R.; Yoshimi, Naoki; Kanno, Hiroyuki; Moriya, Takuya; Ishida, Yoji; Satoh, Yohichi; Nakao, Masahiro; Ogawa, Emiko; Matsuo, Satoshi; Kasai, Hiroyuki; Kumagai, Kazuhiro; Motoda, Toshihiro; Hopson, Nathan

    2013-01-01

    Background: Recent advances in information technology have allowed the development of a telepathology system involving high-speed transfer of high-volume histological figures via fiber optic landlines. However, at present there are geographical limits to landlines. The Japan Aerospace Exploration Agency (JAXA) has developed the “Kizuna” ultra-high speed internet satellite and has pursued its various applications. In this study we experimented with telepathology in collaboration with JAXA using Kizuna. To measure the functionality of the Wideband InterNetworking engineering test and Demonstration Satellite (WINDS) ultra-high speed internet satellite in remote pathological diagnosis and consultation, we examined whether the data transfer speed and stability were adequate to conduct telepathology (both diagnosis and conferencing) with functionality and ease similar or equal to telepathology using fiber-optic landlines. Materials and Methods: We performed experiments for 2 years. In year 1, we tested the usability of the WINDS for telepathology with real-time video and virtual slide systems. These are state-of-the-art technologies requiring massive volumes of data transfer. In year 2, we tested the usability of the WINDS for three-way teleconferencing with virtual slides. Facilities in Iwate (northern Japan), Tokyo, and Okinawa were connected via the WINDS and voice conferenced while remotely examining and manipulating virtual slides. Results: Network function parameters measured using ping and Iperf were within acceptable limits. However, stage movement, zoom, and conversation suffered a lag of approximately 0.8 s when using real-time video, and a delay of 60-90 s was experienced when accessing the first virtual slide in a session. No significant lag or inconvenience was experienced during diagnosis and conferencing, and the results were satisfactory.
Our hypothesis was confirmed for both remote diagnosis using real-time video and virtual slide systems, and also for teleconferencing using virtual slide systems with voice functionality. Conclusions: Our results demonstrate the feasibility of ultra-high-speed internet satellite networks for use in telepathology. Because communications satellites have less geographical and infrastructural requirements than landlines, ultra-high-speed internet satellite telepathology represents a major step toward alleviating regional disparity in the quality of medical care. PMID:24244882

  6. Tsunami Research driven by Survivor Observations: Sumatra 2004, Tohoku 2011 and the Lituya Bay Landslide (Plinius Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Fritz, Hermann M.

    2014-05-01

    The 10th anniversary of the 2004 Indian Ocean tsunami recalls the advent of tsunami video recordings by eyewitnesses. The tsunami of December 26, 2004 severely affected Banda Aceh along the North tip of Sumatra (Indonesia) at a distance of 250 km from the epicenter of the Magnitude 9.0 earthquake. The tsunami flow velocity analysis focused on two survivor videos recorded within Banda Aceh more than 3km from the open ocean. The exact locations of the tsunami eyewitness video recordings were revisited to record camera calibration ground control points. The motion of the camera during the recordings was determined. The individual video images were rectified with a direct linear transformation (DLT). Finally a cross-correlation based particle image velocimetry (PIV) analysis was applied to the rectified video images to determine instantaneous tsunami flow velocity fields. The measured overland tsunami flow velocities were within the range of 2 to 5 m/s in downtown Banda Aceh, Indonesia. The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of Japan caused catastrophic damage and loss of life. Fortunately many survivors at evacuation sites recorded countless tsunami videos with unprecedented spatial and temporal coverage. Numerous tsunami reconnaissance trips were conducted in Japan. This report focuses on the surveys at selected tsunami eyewitness video recording locations along Japan's Sanriku coast and the subsequent tsunami video image analysis. Locations with high quality survivor videos were visited, eyewitnesses interviewed and detailed site topography scanned with a terrestrial laser scanner (TLS). The analysis of the tsunami videos followed the four step procedure developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh. Tsunami currents up to 11 m/s were measured in Kesennuma Bay making navigation impossible. 
Further, tsunami height and runup hydrographs are derived from the videos to discuss the complex effects of coastal structures on inundation and outflow velocities. Tsunamis generated by landslides and volcanic island collapses account for some of the most catastrophic events. On July 10, 1958, an Mw 8.3 earthquake along the Fairweather fault triggered a major subaerial landslide into Gilbert Inlet at the head of Lituya Bay on the south coast of Alaska. The landslide impacted the water at high speed, generating a giant tsunami and the highest wave runup in recorded history. This event was observed by eyewitnesses on board the sole surviving fishing boat, which managed to ride the tsunami. The mega-tsunami runup to an elevation of 524 m caused total forest destruction and erosion down to bedrock on a spur ridge in direct prolongation of the slide axis. A cross-section of Gilbert Inlet was rebuilt in a two-dimensional physical laboratory model. Particle image velocimetry (PIV) provided instantaneous velocity vector fields of the decisive initial phase, with landslide impact and wave generation, as well as the runup on the headland. Three-dimensional source and runup scenarios based on real-world events are physically modeled in the NEES tsunami wave basin (TWB) at Oregon State University (OSU). The measured landslide and tsunami data serve to validate and advance numerical landslide tsunami models. This lecture encompasses multi-hazard aspects and implications of recent tsunami and cyclonic events around the world, such as the November 2013 Typhoon Haiyan (Yolanda) in the Philippines.
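    At the heart of the PIV step used on the rectified survivor videos is a cross-correlation: the displacement of an interrogation window between two frames is taken as the location of the correlation peak. A minimal sketch of that estimator via FFTs; real PIV pipelines add windowing, sub-pixel peak fitting, and outlier rejection, and the synthetic frames below are assumptions:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window b relative to window a,
    from the peak of their circular cross-correlation (computed via FFT)."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT bin indices to signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)                  # (dy, dx) in pixels

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(3, -2), axis=(0, 1))   # pattern moved 3 down, 2 left
dy, dx = piv_displacement(a, b)
```

    Dividing the recovered pixel displacement by the inter-frame time and the rectification scale (metres per pixel from the DLT) yields the overland flow velocity fields reported above.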

  7. Hot Spots from Generated Defects in HMX Crystals

    NASA Astrophysics Data System (ADS)

    Sorensen, Christian; Cummock, Nicholas; O'Grady, Caitlin; Gunduz, I. Emre; Son, Steven

    2017-06-01

    There are several hot spot initiation mechanisms that have been proposed. However, direct observation of these mechanisms under shock or impact compression at the macroscopic scale in explosives is difficult. Phase contrast imaging (PCI) may be applied to these systems. Here, high-speed video was used to record the optical spectrum and x-ray PCI of shockwave interaction with low-defect HMX crystals and crystals with engineered defects. Additionally, multiple crystals were arranged and observed under shock loading with PCI and optical high-speed video. Sample preparation techniques for generating voids and other engineered defects will be discussed; these methods include drilled holes and laser-machined samples. Insight into hot spot mechanisms was obtained. Funding from ONR's PC@Xtreme MURI.

  8. Multicore-based 3D-DWT video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
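    The transform underlying such an encoder is separable: a 1D wavelet transform applied along x, y and time over a group of pictures (GOP). As a toy illustration only (one Haar level, no run-length entropy coding, random data standing in for a GOP), the 3D transform and its perfect reconstruction can be sketched as:

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the (orthonormal) Haar transform along one axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation band
    hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail band
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def ihaar_1d(a, axis):
    """Inverse of haar_1d along the same axis."""
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    lo, hi = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2] = (lo + hi) / np.sqrt(2.0)
    out[1::2] = (lo - hi) / np.sqrt(2.0)
    return np.moveaxis(out, 0, axis)

gop = np.random.default_rng(1).random((8, 16, 16))   # 8 frames of 16x16
coeffs = gop
for ax in range(3):          # forward 3D-DWT: t, y, x in turn
    coeffs = haar_1d(coeffs, ax)
rec = coeffs
for ax in range(3):          # inverse transform restores the GOP exactly
    rec = ihaar_1d(rec, ax)
```

    The memory behaviour the abstract highlights follows from this structure: only one GOP needs to be resident, and each 1D pass can be computed nearly in place.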

  9. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos that employ the Rapid Serial Visual Presentation (RSVP) based EEG paradigm and surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then uses those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value of 1 (area under the ROC curve), indicating perfect classification, over a range of display frequencies and video speeds.
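    The chip-labelling step can be sketched with a deliberately simple motion measure standing in for the paper's surprise computation: score each chip by its mean absolute frame-to-frame difference and threshold it, which then selects static or video RSVP for that chip. The threshold and the synthetic chips below are illustrative assumptions:

```python
import numpy as np

def label_chip(frames, threshold=0.05):
    """Label an image chip 'moving' or 'static' from a stack of frames
    (shape: n_frames x height x width), using mean absolute frame
    difference as a stand-in motion/surprise score."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    motion_score = diffs.mean()
    return "moving" if motion_score > threshold else "static"

rng = np.random.default_rng(2)
static_chip = np.repeat(rng.random((1, 8, 8)), 5, axis=0)   # identical frames
moving_chip = rng.random((5, 8, 8))                         # decorrelated frames
labels = (label_chip(static_chip), label_chip(moving_chip))
```

    Chips labelled "static" would then be presented with the static RSVP paradigm and "moving" chips with the video RSVP paradigm, each decoded with its matching EEG classifier.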

  10. Retrieving eruptive vent conditions from dynamical properties of unsteady volcanic plume using high-speed imagery and numerical simulations

    NASA Astrophysics Data System (ADS)

    Tournigand, Pierre-Yves; Taddeucci, Jacopo; José Peña Fernandez, Juan; Gaudin, Damien; Sesterhenn, Jörn; Scarlato, Piergiorgio; Del Bello, Elisabetta

    2016-04-01

    Vent conditions are key parameters controlling volcanic plume dynamics and the ensuing hazards, such as human health issues, infrastructure damage, and air traffic disruption. Indeed, for a given magma and vent geometry, plume development and stability over time mainly depend on the mass eruption rate, a function of the velocity and density of the eruptive mixture at the vent, where direct measurements are impossible. High-speed imaging of eruptive plumes and numerical jet simulations were here non-dimensionally coupled to retrieve eruptive vent conditions starting from measurable plume parameters. High-speed videos of unsteady, momentum-driven volcanic plumes (jets) from Strombolian to Vulcanian activity at three different volcanoes (Sakurajima, Japan; Stromboli, Italy; and Fuego, Guatemala) were recorded in the visible and thermal spectral ranges using an Optronis CR600x2 camera (1280×1024 pixel resolution, 500 Hz frame rate) and a FLIR SC655 camera (640×480 pixel resolution, 50 Hz frame rate). Correction for atmospheric effects and pre-processing of the thermal videos were performed to increase measurement accuracy. Pre-processing consists of the extraction of the plume temperature gradient over time, combined with a temperature threshold in order to remove the image background. The velocity and apparent surface temperature fields of the plumes, and their changes over timescales of tenths of seconds, were then measured by particle image velocimetry and thermal image analysis, respectively, of the pre-processed videos. The parameters thus obtained are representative of the outer plume surface, corresponding to its boundary shear layer at the interface with the atmosphere, and may significantly differ from conditions in the plume interior.
To retrieve information on the interior of the plume, and possibly extrapolate it even at the eruptive vent level, video-derived plume parameters were non-dimensionally compared to the results of numerical simulations of momentum-driven gas jets impulsively released from a vent in a pressurized container. These simulations solve flow conditions globally, thus allowing one to set empirical relations between flow conditions in different parts of the jet, most notably the shear layer, the flow centerline, and at the vent. Applying these relations to the volcanic cases gives access to the evolution of velocity and temperature at the vent. From these, the speed of sound and flow Mach number can be obtained, which in turn can be used to estimate the pressure ratio between atmosphere and vent and finally, assuming some conduit geometry and mixture density, the total amount of erupted gas. Preliminary results suggest subsonic exit velocities of the eruptive mixture at the vent, and a plume centerline velocity that can be twice as fast as the one measured at the plume boundary.
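    The final step described above, going from a retrieved vent temperature and velocity to the speed of sound and flow Mach number, is a short calculation if the eruptive mixture is treated as an ideal gas. The gas properties below (heat capacity ratio, specific gas constant for a steam-rich mixture) and the input values are assumptions for illustration:

```python
import math

def mach_number(v_ms, T_K, gamma=1.3, R=462.0):
    """Flow Mach number and local speed of sound for an ideal-gas mixture.
    gamma: heat capacity ratio; R: specific gas constant (J kg^-1 K^-1),
    462 being the value for water vapour (assumed steam-rich mixture)."""
    a = math.sqrt(gamma * R * T_K)        # speed of sound, m/s
    return v_ms / a, a

# Example: 150 m/s exit velocity at 900 K -> clearly subsonic
M, a = mach_number(v_ms=150.0, T_K=900.0)
```

    A Mach number below one, as in this example, is consistent with the preliminary finding of subsonic exit velocities; the Mach number in turn feeds the vent-to-atmosphere pressure-ratio estimate mentioned above.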

  11. Analysis of Space Shuttle Primary Reaction-Control Engine-Exhaust Transients

    DTIC Science & Technology

    2008-10-01

    sensitivity in the spectral range of 0.4 to 0.9 µm. The sensor gain was set to limit the size of the spot attributable to saturation by the solar...setting of the LAAT sensor. Table 1 lists the pertinent parameters for 22 attitude-control burns for which quality (30 frames per second) video footage was...intensity evolution of a narrow pulse of 1-µm-diam droplets flying from the GLO sensor at the transient-2 representative speed of 1.6 km s⁻¹. The

  12. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  13. The role of laryngoscopy in the diagnosis of spasmodic dysphonia.

    PubMed

    Daraei, Pedram; Villari, Craig R; Rubin, Adam D; Hillel, Alexander T; Hapner, Edie R; Klein, Adam M; Johns, Michael M

    2014-03-01

    Spasmodic dysphonia (SD) can be difficult to diagnose, and patients often see multiple physicians for many years before diagnosis. Improving the speed of diagnosis for individuals with SD may decrease the time to treatment and improve patient quality of life more quickly. To assess whether the diagnosis of SD can be accurately predicted through auditory cues alone without the assistance of visual cues offered by laryngoscopic examination. Single-masked, case-control study at a specialized referral center that included patients who underwent laryngoscopic examination as part of a multidisciplinary workup for dysphonia. Twenty-two patients were selected in total: 10 with SD, 5 with vocal tremor, and 7 controls without SD or vocal tremor. The laryngoscopic examination was recorded, deidentified, and edited to make 3 media clips for each patient: video alone, audio alone, and combined video and audio. These clips were randomized and presented to 3 fellowship-trained laryngologist raters (A.D.R., A.T.H., and A.M.K.), who established the most probable diagnosis for each clip. Intrarater and interrater reliability were evaluated using repeat clips incorporated in the presentations. We measured diagnostic accuracy for video-only, audio-only, and combined multimedia clips. These measures were established before data collection. Data analysis was accomplished with analysis of variance and Tukey honestly significant differences. Of patients with SD, diagnostic accuracy was 10%, 73%, and 73% for video-only, audio-only, and combined, respectively (P < .001, df = 2). Of patients with vocal tremor, diagnostic accuracy was 93%, 73%, and 100% for video-only, audio-only, and combined, respectively (P = .05, df = 2). Of the controls, diagnostic accuracy was 81%, 19%, and 62% for video-only, audio-only, and combined, respectively (P < .001, df = 2). The diagnosis of SD during examination is based primarily on auditory cues. 
Viewing combined audio and video clips afforded no change in diagnostic accuracy compared with audio alone. Laryngoscopy serves an important role in the diagnosis of SD by excluding other pathologic causes and identifying vocal tremor.

  14. Using PDV to Understand Damage in Rocket Motor Propellants

    NASA Astrophysics Data System (ADS)

    Tear, Gareth; Chapman, David; Ottley, Phillip; Proud, William; Gould, Peter; Cullis, Ian

    2017-06-01

There is a continuing requirement to design and manufacture insensitive munition (IM) rocket motors for in-service use under a wide range of conditions, particularly with respect to shock initiation and detonation of damaged propellant spalled across the central bore of the rocket motor (XDT). High-speed photography has been crucial in determining this behaviour; however, attempts to model the dynamic behaviour are limited by the lack of precise particle and wave velocity data against which to validate. In this work Photonic Doppler Velocimetry (PDV) has been combined with high-speed video to give accurate point velocity and timing measurements of the rear surface of a propellant block impacted by a fragment travelling up to 1.4 km s-1. By combining traditional high-speed video with PDV through a dichroic mirror, the point of velocity measurement within the debris cloud has been determined. This demonstrates a new capability to characterise the damage behaviour of a double-base rocket motor propellant and hence validate the damage and fragmentation algorithms used in the numerical simulations.
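The core PDV data-reduction step, from beat frequency to surface velocity, can be sketched as below. This is an illustrative homodyne example, not the authors' pipeline: the 1550 nm wavelength, digitizer rate, and record length are assumptions, and a real analysis would use a sliding-window spectrogram rather than a single FFT.

```python
import numpy as np

# Assumed parameters for illustration only.
WAVELENGTH = 1550e-9          # laser wavelength (m)
SAMPLE_RATE = 25e9            # digitizer sample rate (Hz)

def pdv_velocity(signal, sample_rate=SAMPLE_RATE, wavelength=WAVELENGTH):
    """Estimate surface velocity from the dominant PDV beat frequency.

    In homodyne PDV, mixing the Doppler-shifted return with the reference
    beam produces a beat at f_b = 2 v / lambda, so v = f_b * lambda / 2.
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    f_beat = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return f_beat * wavelength / 2.0

# Usage: a synthetic beat signal for a surface moving at 1.4 km/s.
t = np.arange(4096) / SAMPLE_RATE
v_true = 1400.0
sig = np.cos(2 * np.pi * (2 * v_true / WAVELENGTH) * t)
v_est = pdv_velocity(sig)
```

The frequency-bin spacing (sample rate over record length) sets the velocity resolution, here a few metres per second.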

  15. Using NetMeeting for remote configuration of the Otto Bock C-Leg: technical considerations.

    PubMed

    Lemaire, E D; Fawcett, J A

    2002-08-01

Telehealth has the potential to be a valuable tool for technical and clinical support of computer-controlled prosthetic devices. This pilot study examined the use of Internet-based, desktop video conferencing for remote configuration of the Otto Bock C-Leg. Laboratory tests involved connecting two computers running Microsoft NetMeeting over a local area network (IP protocol). Over 56 kb s(-1), DSL/cable, and 10 Mb s(-1) LAN connection speeds, a prosthetist remotely configured a user's C-Leg by using Application Sharing, Live Video, and Live Audio. A similar test between sites in Ottawa and Toronto, Canada was limited by the notebook computer's 28 kb s(-1) modem. At the 28 kb s(-1) Internet-connection speed, NetMeeting's application sharing feature was not able to update the remote Sliders window fast enough to display peak toe loads and peak knee angles. These results support the use of NetMeeting as an accessible and cost-effective tool for remote C-Leg configuration, provided that sufficient Internet data-transfer speed is available.

  16. Cost/benefit analysis for video security systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-01-01

Dr. Don Hush and Scott Chapman, in conjunction with the Electrical and Computer Engineering Department of the University of New Mexico (UNM), have been contracted by Los Alamos National Laboratories to perform research in the area of high-security video analysis. The first phase of this research, presented in this report, is a cost/benefit analysis of various approaches to the problem in question. This discussion begins with a description of three architectures that have been used as solutions to the problem of high-security surveillance. An overview of the relative merits and weaknesses of each of the proposed systems is included. These descriptions are followed directly by a discussion of the criteria chosen in evaluating the systems and the techniques used to perform the comparisons. The results are then given in graphical and tabular form, and their implications discussed. The project to this point has involved assessing hardware and software issues in image acquisition, processing, and change detection. Future work is to leave these questions behind to consider the issues of change analysis (particularly the detection of human motion) and alarm decision criteria. The criteria for analysis in this report include: cost; speed; tradeoff issues in moving primitive operations from software to hardware; real-time operation considerations; change image resolution; and computational requirements.

  17. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TM) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.

  18. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

We propose a video coding scheme that departs from traditional motion estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the advantages of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
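The image-dependent color space transformation mentioned above can be illustrated with a small sketch: the KLT basis is the eigenvector set of the frame's RGB covariance matrix, which decorrelates the color channels. This is the generic construction, not the authors' implementation.

```python
import numpy as np

def klt_color_transform(frame):
    """Derive an image-dependent color transform via the Karhunen-Loeve
    Transform: eigen-decomposition of the RGB covariance matrix.

    frame: H x W x 3 array. Returns decorrelated channels, the basis,
    and the channel means.
    """
    pixels = frame.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov((pixels - mean).T)            # 3 x 3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
    basis = eigvecs[:, ::-1]                   # principal axis first
    transformed = (pixels - mean) @ basis
    return transformed.reshape(frame.shape), basis, mean

# Usage: decorrelate a synthetic frame with strongly correlated channels.
rng = np.random.default_rng(1)
base = rng.random((32, 32))
frame = np.stack([base, 0.8 * base + 0.1, 0.5 * base + 0.2], axis=-1)
frame += 0.01 * rng.random((32, 32, 3))
decorrelated, basis, mean = klt_color_transform(frame)
```

After the transform, most of the signal energy concentrates in the first channel, which is what makes the subsequent coding more efficient.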

  19. Phasegram Analysis of Vocal Fold Vibration Documented With Laryngeal High-speed Video Endoscopy.

    PubMed

    Herbst, Christian T; Unger, Jakob; Herzel, Hanspeter; Švec, Jan G; Lohscheller, Jörg

    2016-11-01

In a recent publication, the phasegram, a bifurcation diagram over time, has been introduced as an intuitive visualization tool for assessing the vibratory states of oscillating systems. Here, this nonlinear dynamics approach is augmented with quantitative analysis parameters, and it is applied to clinical laryngeal high-speed video (HSV) endoscopic recordings of healthy and pathological phonations. HSV data from a total of 73 females diagnosed as healthy (n = 42), with functional dysphonia (n = 15), or with unilateral vocal fold paralysis (n = 16) were quantitatively analyzed. Glottal area waveforms (GAW) and left and right hemi-GAWs (hGAW) were extracted from the HSV recordings. Based on Poincaré sections through phase space-embedded signals, two novel quantitative parameters were computed: the phasegram entropy (PE) and the phasegram complexity estimate (PCE), inspired by signal entropy and correlation dimension computation, respectively. Both PE and PCE assumed higher average values (suggesting more irregular vibrations) for the pathological as compared with the healthy participants, thus significantly discriminating the healthy group from the paralysis group (P = 0.02 for both PE and PCE). Comparisons of individual PE or PCE data for the left and the right hGAW within each subject resulted in asymmetry measures for the regularity of vocal fold vibration. The PCE-based asymmetry measure revealed significant differences between the healthy group and the paralysis group (P = 0.03). Quantitative phasegram analysis of GAW and hGAW data is a promising tool for the automated processing of HSV data in research and in clinical practice. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
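The idea behind an entropy measure on a Poincaré section can be sketched as follows. This is only an illustration of the principle, not the published PE definition: the delay, bin count, and section placement are assumptions, and the paper's parameter will differ in detail.

```python
import numpy as np

def phasegram_entropy(signal, delay=10, bins=16):
    """Illustrative phasegram-entropy-style measure (assumption: the
    published PE differs in detail). Delay-embed the glottal area
    waveform in 2-D, take a Poincare section at the mean level (upward
    crossings), and compute the Shannon entropy of the distribution of
    section points. Periodic vibration pierces the section at few
    points (low entropy); irregular vibration spreads out (high entropy).
    """
    x = signal[:-delay]
    y = signal[delay:]
    level = x.mean()
    crossings = np.where((x[:-1] < level) & (x[1:] >= level))[0]
    section = y[crossings]
    if len(section) == 0:
        return 0.0
    hist, _ = np.histogram(section, bins=bins,
                           range=(signal.min(), signal.max()))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Usage: periodic vibration vs. noise-like vibration.
t = np.arange(4000)
pe_periodic = phasegram_entropy(np.sin(2 * np.pi * t / 100))
rng = np.random.default_rng(2)
pe_irregular = phasegram_entropy(rng.random(4000))
```

The irregular signal yields a markedly higher value than the periodic one, mirroring the healthy-versus-pathological contrast reported above.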

  20. Visualization of fluid dynamics at NASA Ames

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1989-01-01

    The hardware and software currently used for visualization of fluid dynamics at NASA Ames is described. The software includes programs to create scenes (for example particle traces representing the flow over an aircraft), programs to interactively view the scenes, and programs to control the creation of video tapes and 16mm movies. The hardware includes high performance graphics workstations, a high speed network, digital video equipment, and film recorders.

  1. Manipulations of the features of standard video lottery terminal (VLT) games: effects in pathological and non-pathological gamblers.

    PubMed

    Loba, P; Stewart, S H; Klein, R M; Blackburn, J R

    2001-01-01

    The present study was conducted to identify game parameters that would reduce the risk of abuse of video lottery terminals (VLTs) by pathological gamblers, while exerting minimal effects on the behavior of non-pathological gamblers. Three manipulations of standard VLT game features were explored. Participants were exposed to: a counter which displayed a running total of money spent; a VLT spinning reels game where participants could no longer "stop" the reels by touching the screen; and sensory feature manipulations. In control conditions, participants were exposed to standard settings for either a spinning reels or a video poker game. Dependent variables were self-ratings of reactions to each set of parameters. A set of 2(3) x 2 x 2 (game manipulation [experimental condition(s) vs. control condition] x game [spinning reels vs. video poker] x gambler status [pathological vs. non-pathological]) repeated measures ANOVAs were conducted on all dependent variables. The findings suggest that the sensory manipulations (i.e., fast speed/sound or slow speed/no sound manipulations) produced the most robust reaction differences. Before advocating harm reduction policies such as lowering sensory features of VLT games to reduce potential harm to pathological gamblers, it is important to replicate findings in a more naturalistic setting, such as a real bar.

  2. Enhanced protocol for real-time transmission of echocardiograms over wireless channels.

    PubMed

    Cavero, Eva; Alesanco, Alvaro; García, Jose

    2012-11-01

This paper presents a methodology to transmit clinical video over wireless networks in real time. A 3-D set-partitioning-in-hierarchical-trees compression step prior to transmission is proposed. In order to guarantee the clinical quality of the compressed video, a clinical evaluation specific to each video modality has to be made. This evaluation indicates the minimal transmission rate necessary for an accurate diagnosis. However, the channel conditions produce errors and distort the video. A reliable application protocol is therefore proposed using a hybrid solution in which either retransmission or retransmission combined with forward error correction (FEC) techniques are used, depending on the channel conditions. In order to analyze the proposed methodology, the 2-D mode of an echocardiogram has been assessed. A bandwidth of 200 kbps is necessary to guarantee its clinical quality. Transmission using the proposed solution, and using retransmission and FEC techniques working separately, has been simulated and compared in high-speed uplink packet access (HSUPA) and worldwide interoperability for microwave access (WiMAX) networks. The proposed protocol achieves guaranteed clinical quality at bit error rates higher than the other protocols can tolerate: at a mobile speed of 60 km/h, up to 3.3 times higher for HSUPA and 10 times higher for WiMAX.

  3. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
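The sampling side of the approach can be sketched as a per-pixel shutter code applied to the space-time volume. This is a simplified model for illustration: the single continuous "bump" per pixel reflects the practical sensor constraint the paper describes, but the code layout is an assumption and the dictionary-based reconstruction is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_exposure_capture(volume, bump_length=3):
    """Simulate pixel-wise coded exposure (simplified model). Each pixel
    opens its electronic shutter exactly once per captured frame, for
    `bump_length` consecutive sub-frames starting at a random offset,
    and integrates light only during that window. The T-sub-frame
    space-time volume collapses into a single coded image.
    """
    T, H, W = volume.shape
    start = rng.integers(0, T - bump_length + 1, size=(H, W))
    mask = np.zeros_like(volume)
    for t in range(T):
        mask[t] = (start <= t) & (t < start + bump_length)
    coded = (volume * mask).sum(axis=0)
    return coded, mask

# Usage: an 8-sub-frame volume of a constant scene.
video = np.ones((8, 4, 4))
coded, mask = coded_exposure_capture(video)
```

Reconstruction would then seek a video whose masked sum matches the coded image while being sparse in a learned dictionary.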

  4. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a growing presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  5. Machinability of Al 6061 Deposited with Cold Spray Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Aldwell, Barry; Kelly, Elaine; Wall, Ronan; Amaldi, Andrea; O'Donnell, Garret E.; Lupoi, Rocco

    2017-10-01

    Additive manufacturing techniques such as cold spray are translating from research laboratories into more mainstream high-end production systems. Similar to many additive processes, finishing still depends on removal processes. This research presents the results from investigations into aspects of the machinability of aluminum 6061 tubes manufactured with cold spray. Through the analysis of cutting forces and observations on chip formation and surface morphology, the effect of cutting speed, feed rate, and heat treatment was quantified, for both cold-sprayed and bulk aluminum 6061. High-speed video of chip formation shows changes in chip form for varying material and heat treatment, which is supported by the force data and quantitative imaging of the machined surface. The results shown in this paper demonstrate that parameters involved in cold spray directly impact on machinability and therefore have implications for machining parameters and strategy.

  6. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    NASA Astrophysics Data System (ADS)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.

  7. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of the digital camera and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera was proposed. The system setup comprises a high-speed camera and a line laser, which together capture the out-of-plane displacement of a cantilever beam. The cantilever beam, with an artificial crack, was excited, and the vibration process was recorded by the camera. A methodology called motion magnification, which amplifies subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work will be discussed.
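The principle of motion magnification can be sketched in its simplest, intensity-based Eulerian form: temporally band-pass each pixel's time series around the structural frequency band and add the amplified band back. This is an assumption-laden stand-in for the method used in the paper (which may use the phase-based variant); frequencies, band, and gain below are arbitrary.

```python
import numpy as np

def magnify_motion(frames, fps, f_lo, f_hi, alpha=20.0):
    """Minimal Eulerian-style motion magnification sketch. Band-pass
    each pixel's intensity over time via an FFT mask covering
    [f_lo, f_hi] Hz, then add the amplified band back to the video.
    frames: T x H x W float array.
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum = np.fft.rfft(frames, axis=0)
    spectrum[~band] = 0                   # keep only the band of interest
    bandpassed = np.fft.irfft(spectrum, n=T, axis=0)
    return frames + alpha * bandpassed

# Usage: a 4 Hz flicker of amplitude 0.01 on a 2 x 2 "video" at 32 fps.
t = np.arange(64) / 32.0
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 4 * t)[:, None, None] * np.ones((1, 2, 2))
out = magnify_motion(frames, fps=32.0, f_lo=3.0, f_hi=5.0, alpha=20.0)
```

With alpha = 20, the 0.01-amplitude oscillation in the band becomes roughly 20 times larger in the residual, making a subtle vibration visible for modal identification.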

  8. The ESA/NASA Multi-Aircraft ATV-1 Re-Entry Campaign: Analysis of Airborne Intensified Video Observations from the NASA/JSC Experiment

    NASA Technical Reports Server (NTRS)

    Barker, Ed; Maley, Paul; Mulrooney, Mark; Beaulieu, Kevin

    2009-01-01

In September 2008, a joint ESA/NASA multi-instrument airborne observing campaign was conducted over the southern Pacific Ocean. The objective was the acquisition of data to support detailed atmospheric re-entry analysis for the first flight of the European Automated Transfer Vehicle (ATV)-1. Skilled observers were deployed aboard two aircraft which were flown at 12.8 km altitude within visible range of the ATV-1 re-entry zone. The observers operated a suite of instruments with low-light-level detection sensitivity, including still cameras, high-speed and 30 fps video cameras, and spectrographs. The collected data have provided valuable information regarding the dynamic time evolution of the ATV-1 re-entry fragmentation. Specifically, the data have satisfied the primary mission objective of recording the explosion of ATV-1's primary fuel tank, thereby validating predictions regarding the tank's demise and the altitude of its occurrence. Furthermore, the data contain the brightness and trajectories of several hundred ATV-1 fragments. It is the analysis of these properties, as recorded by the particular instrument set sponsored by the NASA/Johnson Space Center, that we present here.

  9. Determining the frequency of open windows in motor vehicles: a pilot study using a video camera in Houston, Texas during high temperature conditions.

    PubMed

    Long, Tom; Johnson, Ted; Ollison, Will

    2002-05-01

Researchers have developed a variety of computer-based models to estimate population exposure to air pollution. These models typically estimate exposures by simulating the movement of specific population groups through defined microenvironments. Exposures in the motor vehicle microenvironment are significantly affected by air exchange rate, which in turn is affected by vehicle speed, window position, vent status, and air conditioning use. A pilot study was conducted in Houston, Texas, during September 2000 for a specific set of weather, vehicle speed, and road type conditions to determine whether useful information on the position of windows, sunroofs, and convertible tops could be obtained through the use of video cameras. Monitoring was conducted at three sites (two arterial roads and one interstate highway) on the perimeter of Harris County located in or near areas not subject to mandated Inspection and Maintenance programs. Each site permitted an elevated view of vehicles as they proceeded through a turn, thereby exposing all windows to the stationary video camera. Five videotaping sessions were conducted over a two-day period in which the Heat Index (HI), a function of temperature and humidity, varied from 80 to 101 degrees F and vehicle speed varied from 30 to 74 mph. The resulting videotapes were processed to create a master database listing vehicle-specific data for site location, date, time, vehicle type (e.g., minivan), color, window configuration (e.g., four windows and sunroof), number of windows in each of three position categories (fully open, partially open, and closed), HI, and speed. Of the 758 vehicles included in the database, 140 (18.5 percent) were labeled as "open," indicating a window, sunroof, or convertible top was fully or partially open.
The results of a series of stepwise linear regression analyses indicated that the probability of a vehicle in the master database being "open" was weakly affected by time of day, vehicle type, vehicle color, vehicle speed, and HI. In particular, open windows occurred more frequently when vehicle speed was less than 50 mph during periods when HI exceeded 99.9 degrees F and the vehicle was a minivan or passenger van. Overall, the pilot study demonstrated that data on factors affecting vehicle window position could be acquired through a relatively simple experimental protocol using a single video camera. Limitations of the study requiring further research include the inability to determine the status of the vehicle air conditioning system; lack of a wide range of weather, vehicle speed, and road type conditions; and the need to exclude some vehicles from statistical analyses due to ambiguous window positions.

  10. Landscape analysis and pattern of hurricane impact and circulation on mangrove forests of the everglades

    USGS Publications Warehouse

    Doyle, T.W.; Krauss, K.W.; Wells, C.J.

    2009-01-01

The Everglades ecosystem contains the largest contiguous tract of mangrove forest outside the tropics, which was also coincidentally intersected by a major Category 5 hurricane. Airborne videography was flown to capture the landscape pattern and process of forest damage in relation to storm trajectory and circulation. Two aerial video transects, representing different topographic positions, were used to quantify forest damage from video frame analysis in relation to prevailing wind force, treefall direction, and forest height. A hurricane simulation model was applied to reconstruct wind fields corresponding to the ground location of each video frame and to correlate observed treefall and destruction patterns with wind speed and direction. Mangrove forests within the storm's eyepath and in the right-side (forewind) quadrants suffered whole or partial blowdowns, while left-side (backwind) sites south of the eyewall zone incurred moderate canopy reduction and defoliation. Sites along the coastal transect sustained substantially more storm damage than sites along the inland transect, which may be attributed to differences in stand exposure and/or stature. Observed treefall directions were shown to be non-random and associated with hurricane trajectory and simulated forewind azimuths. Wide-area sampling using airborne videography provided an efficient adjunct to limited ground observations and improved our spatial understanding of how hurricanes imprint landscape-scale patterns of disturbance. © 2009 The Society of Wetland Scientists.

  11. Effect of aerobic training on inter-arm coordination in highly trained swimmers.

    PubMed

    Schnitzler, Christophe; Seifert, Ludovic; Chollet, Didier; Toussaint, Huub

    2014-02-01

The effect of three months of aerobic training on spatio-temporal and coordination parameters was examined during a swim trial at maximal aerobic speed. Nine male swimmers swam a 400-m front crawl at maximal speed twice: trial 1 after the summer break and trial 2 after three months of aerobic training. Video analysis determined the stroke parameters (swimming speed, stroke length, and stroke rate) and coordination parameters (Index of Coordination and propulsive phase duration) for every 50-m segment. All swimmers significantly increased their swimming speed after training. For all swimmers except one, stroke length increased and stroke rate remained constant, whereas the Index of Coordination and the propulsive phase duration decreased (p < .05). This study suggests that aerobic training developed a greater force impulse in the swimmers during the propulsive phases, which allowed them to take advantage of longer non-propulsive phases. In this case, catch-up coordination, if associated with greater stroke length, can be an efficient coordination mode that reflects optimal drag/propulsion adaptation. This finding thus provides new insight into swimmers' adaptations to the middle-distance event. Copyright © 2013 Elsevier B.V. All rights reserved.
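The Index of Coordination used above can be sketched numerically. This follows the common front-crawl convention (negative = catch-up, zero = opposition, positive = superposition); the four-instant, single-cycle form below is a simplification of what a full video analysis would compute over many cycles.

```python
def index_of_coordination(left_prop_end, right_prop_start,
                          right_prop_end, next_left_prop_start,
                          stroke_duration):
    """Index of Coordination for one front-crawl stroke cycle: the lag
    between the end of one arm's propulsive phase and the start of the
    other arm's, averaged over the two arm transitions and expressed as
    a percentage of stroke duration. Inputs are instants in seconds
    read off the video; a negative result means a gap (catch-up), zero
    means opposition, positive means overlapping propulsion.
    """
    lag1 = left_prop_end - right_prop_start
    lag2 = right_prop_end - next_left_prop_start
    return 100.0 * (lag1 + lag2) / (2.0 * stroke_duration)

# Usage: a 0.2 s dead time after each propulsive phase in a 2 s stroke,
# i.e. catch-up coordination at -10%.
idc = index_of_coordination(1.0, 1.2, 2.2, 2.4, stroke_duration=2.0)
```

In the study's interpretation, a more negative IdC with a longer stroke length reflects longer glide phases made possible by a stronger propulsive impulse.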

  12. Open-source telemedicine platform for wireless medical video communication.

    PubMed

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
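The objective metric named above, PSNR between original and received frames, has a standard definition that can be sketched directly (the VFD temporal-alignment step that precedes it is outside this sketch).

```python
import numpy as np

def psnr(original, received, peak=255.0):
    """Peak signal-to-noise ratio (dB) between an original frame and a
    received/decoded frame of equal shape; peak is the maximum possible
    pixel value (255 for 8-bit video).
    """
    diff = original.astype(np.float64) - received.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")           # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage on synthetic 8-bit frames: a uniform error of 16 gray levels.
ref = np.zeros((64, 64), dtype=np.uint8)
deg = np.full((64, 64), 16, dtype=np.uint8)
val = psnr(ref, deg)
```

In practice the per-frame PSNRs are averaged over the aligned sequence, and diagnostic-quality thresholds are set against the clinical evaluation protocol.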

  14. Three-Dimensional Reconstruction of Cloud-to-Ground Lightning Using High-Speed Video and VHF Broadband Interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yun; Qiu, Shi; Shi, Lihua; Huang, Zhengyu; Wang, Tao; Duan, Yantao

    2017-12-01

The time-resolved three-dimensional (3-D) spatial reconstruction of lightning channels using high-speed video (HSV) images and VHF broadband interferometer (BITF) data is presented for the first time in this paper. Because the VHF and optical radiation in the step-formation process occur with a time separation of no more than 1 μs, observations by BITF and HSV at two different sites make it possible to reconstruct the time-resolved 3-D channel of lightning. With the proposed procedures for 3-D reconstruction of leader channels, dart leaders as well as stepped leaders with complex multiple branches can be well reconstructed. The differences between 2-D and 3-D speeds of leader channels are analyzed by comparing the development of leader channels in 2-D and 3-D space. Since a return stroke (RS) usually follows the path of the preceding leader channel, the 3-D speeds of the return strokes are estimated for the first time by combining the 3-D structure of the preceding leaders with HSV image sequences. For the fourth RS, the ratio of the 3-D to 2-D RS speed increases with height, and the largest ratio reaches 2.03, which is larger than the result for triggered lightning reported by Idone. Since BITF can detect lightning radiation in a 360° view, correlated BITF and HSV observations increase the 3-D detection probability compared with dual-station HSV observations, helping to capture more events and gain a deeper understanding of the lightning process.
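The geometric reason 2-D speeds underestimate 3-D speeds can be sketched with a toy calculation: any channel motion along the camera's line of sight is invisible in the image plane. This is an illustration only; the image plane is assumed to be x-z (depth along y), which is not the paper's actual camera geometry.

```python
import numpy as np

def leader_speeds(points_3d, times):
    """Per-segment 2-D and 3-D propagation speeds of a leader channel,
    given time-ordered 3-D channel-tip positions (m) and times (s).
    Illustrative assumption: the camera image plane is x-z, so the 2-D
    projection simply drops the depth coordinate y.
    """
    p = np.asarray(points_3d, dtype=float)
    t = np.asarray(times, dtype=float)
    seg3 = np.linalg.norm(np.diff(p, axis=0), axis=1)        # true lengths
    seg2 = np.linalg.norm(np.diff(p[:, [0, 2]], axis=0), axis=1)  # projected
    dt = np.diff(t)
    return seg2 / dt, seg3 / dt

# Usage: a channel moving equally in x and in depth y appears slower
# in 2-D by a factor of sqrt(2).
pts = [(0.0, 0.0, 0.0), (100.0, 100.0, 0.0), (200.0, 200.0, 0.0)]
v2, v3 = leader_speeds(pts, [0.0, 1e-5, 2e-5])
```

The steeper the channel's inclination toward the camera, the larger the 3-D/2-D ratio, consistent with the height-dependent ratios reported for the fourth return stroke.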

  15. Evaluating the relationship between white matter integrity, cognition, and varieties of video game learning.

    PubMed

    Ray, Nicholas R; O'Connell, Margaret A; Nashiro, Kaoru; Smith, Evan T; Qin, Shuo; Basak, Chandramallika

    2017-01-01

    Many studies are currently researching the effects of video games, particularly in the domain of cognitive training. Great variability exists among video games, however, and few studies have attempted to compare different types of video games. Little is known, for instance, about the cognitive processes or brain structures that underlie learning of different genres of video games. The aim was to examine the cognitive and neural underpinnings of two different types of game learning in order to evaluate their common and separate correlates, with the hope of informing future intervention research. Participants (31 younger adults and 31 older adults) completed an extensive cognitive battery and played two different genres of video games, one action game and one strategy game, for 1.5 hours each. DTI scans were acquired for each participant, and regional fractional anisotropy (FA) values were extracted using the JHU atlas. Behavioral results indicated that better performance on tasks of working memory and perceptual discrimination was related to enhanced learning in both games, even after controlling for age, whereas better performance on a perceptual speed task was uniquely related to enhanced learning of the strategy game. DTI results indicated that white matter FA in the right fornix/stria terminalis was correlated with action game learning, whereas white matter FA in the left cingulum/hippocampus was correlated with strategy game learning, even after controlling for age. Although cognition, to a large extent, was a common predictor of both types of game learning, regional white matter FA could separately predict action and strategy game learning. Given the neural and cognitive correlates of strategy game learning, strategy games may provide a more beneficial training tool for adults suffering from memory-related disorders or declines in processing speed, particularly older adults.

  16. Tracking Steps on Apple Watch at Different Walking Speeds.

    PubMed

    Veerabhadrappa, Praveen; Moran, Matthew Duffy; Renninger, Mitchell D; Rhudy, Matthew B; Dreisbach, Scott B; Gift, Kristin M

    2018-04-09

    QUESTION: How accurate are the step counts obtained from the Apple Watch? In this validation study, video steps vs. Apple Watch steps (mean ± SD) were 2965 ± 144 vs. 2964 ± 145 steps; P < 0.001. Lin's concordance correlation coefficient showed a strong correlation (r = 0.96; P < 0.001) between the two measurements. There was a total error of 0.034% (1.07 steps) for the Apple Watch steps when compared with the manual counts obtained from video recordings. Our study is one of the initial studies to objectively validate the accuracy of step counts obtained from the Apple Watch at different walking speeds. The Apple Watch proved to be an extremely accurate device for measuring daily step counts in adults.
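
    Lin's concordance correlation coefficient used in this validation can be computed directly from its definition; a minimal sketch follows (the step-count arrays below are made up for illustration, not the study's data):

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired measurements.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) variance and covariance. Unlike Pearson's r,
    CCC penalizes both location and scale shifts between the two methods.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    var_x = sum((v - mx) ** 2 for v in x) / n
    var_y = sum((v - my) ** 2 for v in y) / n
    cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov_xy / (var_x + var_y + (mx - my) ** 2)

# Hypothetical paired counts: manually counted video steps vs. watch steps.
video_steps = [2800, 2900, 3000, 3100]
watch_steps = [2795, 2905, 2998, 3102]
print(round(concordance_ccc(video_steps, watch_steps), 4))
```

    A CCC of 1.0 means perfect agreement; values near 1, as reported above, indicate the two methods agree in both trend and absolute level.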

  17. HDR {sup 192}Ir source speed measurements using a high speed video camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio

    Purpose: The dose delivered with a HDR {sup 192}Ir afterloader can be separated into a dwell component, and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a {sup 192}Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position oscillating around the desired position for a duration up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates the transit dose between the dwell positions leading to a maximum overdose of 41 mGy for the considered cases and assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed leading to over and underdoses, which is within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases.
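
    A back-of-the-envelope view of the transit component: at the reported average speed of ~33 cm/s, the time spent in transit over an interdwell distance can be compared with the programmed dwell time. This assumes a constant average speed, which the measured profiles show is only an approximation; the numbers are illustrative:

```python
def transit_fraction(interdwell_cm, avg_speed_cm_s, dwell_time_s):
    """Transit time between two dwell positions at a constant average speed,
    and the fraction of the stationary+transit interval spent in transit."""
    transit_time = interdwell_cm / avg_speed_cm_s
    return transit_time, transit_time / (dwell_time_s + transit_time)

# A 0.5 cm step at ~33 cm/s with the shortest (0.1 s) dwell time:
t, frac = transit_fraction(0.5, 33.0, 0.1)
print(round(t * 1000, 1), "ms in transit,", round(100 * frac, 1), "% of the interval")
```

    The short dwell times are exactly where the transit component matters most, which is consistent with the manufacturer applying a dwell time correction.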

  18. ACCURACY OF SELF-REPORTED FOOT STRIKE PATTERN IN INTERCOLLEGIATE AND RECREATIONAL RUNNERS DURING SHOD RUNNING

    PubMed Central

    Bade, Michael B.; Aaron, Katie

    2016-01-01

    ABSTRACT Background Clinicians are interested in the foot strike pattern (FSP) in runners because of the suggested relationship between the strike pattern and lower extremity injury. Purpose The purpose of this study was to assess the ability of collegiate cross-country runners and recreational runners to self-report their foot strike pattern during running. Study Design Cross-sectional Study Methods Twenty-three collegiate cross-country and 23 recreational runners voluntarily consented to participate. Inclusion criteria included running at least 18 miles per week, experience running on a treadmill, and no history of lower extremity congenital or traumatic deformity or of acute injury in the three months prior to the start of the study. All participants completed a pre-test survey to indicate their typical foot strike pattern during a training run (FSPSurvey). Prior to running, reflective markers were placed on the posterior midsole and the vamp of the running shoe. A high-speed camera was used to film each runner in standing and while running at his or her preferred speed on a treadmill. The angle between the vector formed by the two reflective markers and the superior surface of the treadmill was used to calculate the foot strike angle (FSA). To determine the foot strike pattern from the video data (FSPVideo), the static standing angle was subtracted from the FSA at initial contact of the shoe on the treadmill. In addition to descriptive statistics, percent agreement and chi-square analysis were used to determine distribution differences between the video analysis results and the survey. Results The distributions of the FSPSurvey and the FSPVideo were significantly different for both the XC Runners (p < .01; chi-square = 8.77) and the REC Runners (p < .0002; chi-square = 16.70). The cross-country and recreational runners correctly self-identified their foot strike pattern 56.5% and 43.5% of the time, respectively.
Conclusion The findings of this study suggest that the clinician cannot depend on an experienced runner to correctly self-identify their FSP. Clinicians interested in knowing the FSP of a runner should consider performing the two-dimensional video analysis described in this paper. Level of Evidence 3 PMID:27274421
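
    The foot strike angle computation described above (marker-vector angle relative to the treadmill surface, minus the static standing angle) can be sketched as follows. The sign convention and the classification threshold are assumptions for illustration, not values from the paper:

```python
import math

def foot_strike_angle(heel_marker, toe_marker):
    """Angle (degrees) between the shoe-sole marker vector and the treadmill
    surface, taken as horizontal. Markers are (x, y) image coordinates with
    y up and the runner moving in +x."""
    dx = toe_marker[0] - heel_marker[0]
    dy = toe_marker[1] - heel_marker[1]
    return math.degrees(math.atan2(dy, dx))

def classify_fsp(contact_angle, static_angle, threshold=1.0):
    """Foot strike pattern from the static-corrected angle at initial contact:
    positive -> rearfoot, near zero -> midfoot, negative -> forefoot.
    The 1-degree threshold is a hypothetical choice, not from the study."""
    fsa = contact_angle - static_angle
    if fsa > threshold:
        return "rearfoot"
    if fsa < -threshold:
        return "forefoot"
    return "midfoot"

# Toe marker above the heel marker at contact -> dorsiflexed shoe:
contact = foot_strike_angle((0.0, 0.0), (10.0, 2.0))
print(classify_fsp(contact, static_angle=2.0))
```

    Subtracting the static standing angle removes the shoe's built-in sole angle, so the classification reflects foot posture rather than footwear geometry.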

  19. Positive Correlation Between Motion Analysis Data on the LapMentor Virtual Reality Laparoscopic Surgical Simulator and the Results from Videotape Assessment of Real Laparoscopic Surgeries

    PubMed Central

    McDougall, Elspeth M.; Ono, Yoshinari; Hattori, Ryohei; Baba, Shiro; Iwamura, Masatsugu; Terachi, Toshiro; Naito, Seiji; Clayman, Ralph V.

    2012-01-01

    Abstract Purpose We studied the construct validity of the LapMentor, a virtual reality laparoscopic surgical simulator, and the correlation between data collected on the LapMentor and the results of video assessment of real laparoscopic surgeries. Materials and Methods Ninety-two urologists were tested on basic skill tasks No. 3 (SK3) to No. 8 (SK8) on the LapMentor. They were divided into three groups: Group A (n=25) had no experience performing laparoscopic surgery as chief surgeon; group B (n=33) had <35 cases; and group C (n=34) had ≥35 cases. Group scores on the accuracy, efficacy, and time of the tasks were compared. Forty physicians with ≥20 cases supplied unedited videotapes showing a laparoscopic nephrectomy or an adrenalectomy in its entirety, and the videos were assessed in a blinded fashion by expert referees. Correlations between the videotape score (VS) and performance on the LapMentor were analyzed. Results Group C showed significantly better outcomes than group A in the accuracy (SK5) (P=0.013), efficacy (SK8) (P=0.014), and speed (SKs 3 and 8) (P=0.009 and P=0.002, respectively) of the LapMentor performances. Group B showed significantly better outcomes than group A in the speed and efficacy of the performances in SK8 (P=0.011 and P=0.029, respectively). Analyses of the LapMentor motion data demonstrated that smooth, ideal instrument movement is more important than movement speed for achieving accurate performance in each task. Multiple linear regression analysis indicated that the average accuracy score in SK4, 5, and 8 had a significant positive correlation with VS (P=0.01). Conclusions This study demonstrated the construct and predictive validity of the LapMentor basic skill tasks, supporting their possible usefulness for the preclinical evaluation of laparoscopic skills. PMID:22642549

  20. Experimental Investigation of Aeroelastic Deformation of Slender Wings at Supersonic Speeds Using a Video Model Deformation Measurement Technique

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2013-01-01

    A video-based photogrammetric model deformation system was established as a dedicated optical measurement technique at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. This system was used to measure the wing twist due to aerodynamic loads of two supersonic commercial transport airplane models with identical outer mold lines but different aeroelastic properties. One model featured wings with deflectable leading- and trailing-edge flaps and internal channels to accommodate static pressure tube instrumentation. The wings of the second model were of single-piece construction without flaps or internal channels. The testing was performed at Mach numbers from 1.6 to 2.7, unit Reynolds numbers of 1.0 million to 5.0 million, and angles of attack from -4 degrees to +10 degrees. The video model deformation system quantified the wing aeroelastic response to changes in the Mach number, Reynolds number concurrent with dynamic pressure, and angle of attack and effectively captured the differences in the wing twist characteristics between the two test articles.

  1. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors.

    PubMed

    Belkacem, Abdelkader Nasreddine; Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to control the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
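
    The abstract does not state which bit-rate definition yields 30 bits/min; one common choice in BCI work is Wolpaw's information transfer rate, sketched here for six commands at the reported 77.3% accuracy (the selections-per-minute figure is an assumed parameter, not from the paper):

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate per selection (bits):
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Bit rate in bits/min for a given selection pace."""
    return wolpaw_bits_per_selection(n_classes, accuracy) * selections_per_min

# Six eye-movement commands at the reported 77.3% accuracy:
print(round(wolpaw_bits_per_selection(6, 0.773), 3), "bits/selection")
```

    Multiplying by the selection pace gives bits/min; the exact pace and definition the authors used would be needed to reproduce their 30 bits/min figure.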

  2. Real-Time Control of a Video Game Using Eye Movements and Two Temporal EEG Sensors

    PubMed Central

    Saetia, Supat; Zintus-art, Kalanyu; Shin, Duk; Kambara, Hiroyuki; Yoshimura, Natsue; Berrached, Nasreddine; Koike, Yasuharu

    2015-01-01

    EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to control the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control. PMID:26690500

  3. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and high reliability for various motion analysis tasks in technical and biomechanics applications.
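
    The photogrammetric 3-D measurement step can be illustrated with the classic two-ray midpoint triangulation, a simplification of what multi-camera systems like the one described do after calibration and external orientation turn image observations into rays:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two 3-D rays p1 + t*d1 and
    p2 + s*d2 (a classic two-camera triangulation step). With noisy image
    measurements the rays are skew, and the midpoint is the 3-D estimate."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two rays that actually intersect at (5, 0, 0):
pt = triangulate_midpoint([0, 0, 0], [1, 0, 0], [5, 5, 0], [0, -1, 0])
print(pt)
```

    Systems with three or four cameras typically generalize this to a least-squares intersection of all rays, which further suppresses measurement noise.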

  4. Evaluation of safety effect of turbo-roundabout lane dividers using floating car data and video observation.

    PubMed

    Kieć, Mariusz; Ambros, Jiří; Bąk, Radosław; Gogolín, Ondřej

    2018-06-01

    Roundabouts are one of the safest types of intersections. However, the need to meet requirements of operation, capacity, traffic organization, and surrounding development leads to a variety of design solutions. One such alternative is the turbo-roundabout, which simplifies drivers' decision making, limits lane changing in the roundabout, and induces low driving speeds thanks to raised lane dividers. However, in spite of their generally positive reception, the safety impact of turbo-roundabouts has not been sufficiently studied. Given the low number of existing turbo-roundabouts and the statistical rarity of accidents, most previously conducted studies applied only simple before-after designs or relied on traffic conflicts in micro-simulations. Nevertheless, the presence of raised lane dividers is acknowledged as an important feature of well-performing and safe turbo-roundabouts. Following the previous Polish studies, the primary objective of the present study was to assess the influence of lane dividers on road safety and to develop a reliable and valid surrogate safety measure based on field data that circumvents the limitations of accident data and micro-simulations. The secondary objective was to use the developed surrogate safety measure to assess and compare the safety levels of Polish turbo-roundabout samples with and without raised lane dividers. The surrogate safety measure was based on speed and lane behaviour. Speed was obtained from video observations and floating car data, which enabled the construction of representative speed profiles. Lane behaviour data were gathered from video observations. These data allowed a relative validation of the method by comparing the safety performance of turbo-roundabouts with and without raised lane dividers. Finally, the surrogate measure was applied to evaluate safety levels and to enhance the existing safety performance functions, which combine traffic volumes and speeds as a function of radii. The final models may help quantify the safety impact of different turbo-roundabout solutions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Physics and Video Analysis

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
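
    A core operation behind the video analysis the book describes is estimating velocity from tracked positions, frame by frame. A minimal sketch using central differences follows; the free-fall samples are synthetic, purely for illustration:

```python
def central_diff(times, values):
    """Estimate the derivative at interior samples via central differences,
    as video-analysis tools do with frame-by-frame tracked positions."""
    return [
        (values[i + 1] - values[i - 1]) / (times[i + 1] - times[i - 1])
        for i in range(1, len(values) - 1)
    ]

# Tracked y-positions of a dropped ball at 30 frames/s (meters, downward negative):
fps = 30.0
t = [i / fps for i in range(5)]
y = [-0.5 * 9.81 * ti ** 2 for ti in t]  # ideal free fall, for illustration
v = central_diff(t, y)                   # should follow v = -9.81 * t
print([round(vi, 2) for vi in v])
```

    Central differences are exact for quadratic motion, which is why tracking software recovers constant acceleration cleanly from ideal projectile data; real videos add pixel noise that this simple estimator amplifies.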

  6. A semi-automated software tool to study treadmill locomotion in the rat: from experiment videos to statistical gait analysis.

    PubMed

    Gravel, P; Tremblay, M; Leblond, H; Rossignol, S; de Guise, J A

    2010-07-15

    A computer-aided method for the tracking of morphological markers in fluoroscopic images of a rat walking on a treadmill is presented and validated. The markers correspond to bone articulations in a hind leg and are used to define the hip, knee, ankle and metatarsophalangeal joints. The method allows a user to identify, using a computer mouse, about 20% of the marker positions in a video and interpolate their trajectories from frame-to-frame. This results in a seven-fold speed improvement in detecting markers. This also eliminates confusion problems due to legs crossing and blurred images. The video images are corrected for geometric distortions from the X-ray camera, wavelet denoised, to preserve the sharpness of minute bone structures, and contrast enhanced. From those images, the marker positions across video frames are extracted, corrected for rat "solid body" motions on the treadmill, and used to compute the positional and angular gait patterns. Robust Bootstrap estimates of those gait patterns and their prediction and confidence bands are finally generated. The gait patterns are invaluable tools to study the locomotion of healthy animals or the complex process of locomotion recovery in animals with injuries. The method could, in principle, be adapted to analyze the locomotion of other animals as long as a fluoroscopic imager and a treadmill are available. Copyright 2010 Elsevier B.V. All rights reserved.
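
    The angular gait patterns mentioned above come down to computing joint angles from marker positions. A minimal 2-D sketch of the included angle at a joint follows; the coordinates are hypothetical, not from the rat data:

```python
import math

def joint_angle(prox, joint, dist):
    """Included angle (degrees) at `joint` between segments joint->prox and
    joint->dist, from 2-D marker coordinates (e.g. hip, knee, ankle)."""
    v1 = (prox[0] - joint[0], prox[1] - joint[1])
    v2 = (dist[0] - joint[0], dist[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical hip, knee and ankle marker positions in one fluoroscopic frame:
print(round(joint_angle((0.0, 10.0), (0.0, 0.0), (5.0, -5.0)), 1))
```

    Repeating this per frame, after the "solid body" motion correction described above, yields the angular gait pattern over the step cycle.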

  7. Teaching Physics with Basketball

    NASA Astrophysics Data System (ADS)

    Chanpichai, N.; Wattanakasiwich, P.

    2010-07-01

    Recently, technology and computers have taken on important roles in learning and teaching, including in physics. Advances in technology can help us better relate the physics taught in the classroom to the real world. In this study, we developed a module on teaching projectile motion through shooting a basketball. Students learned about the physics of projectile motion, and then they took videos of their classmates shooting a basketball using a high-speed camera. They then analyzed the videos using Tracker, a video analysis and modeling tool. While working with Tracker, students learned about the relationships between the three kinematics graphs. Moreover, they learned about real projectile motion (with air resistance) through modeling tools. Students' abilities to interpret kinematics graphs were investigated before and after the instruction by using the Test of Understanding Graphs in Kinematics (TUG-K). The maximum normalized gain ⟨g⟩ was 0.77, which indicated students' improvement in determining displacement from the velocity-time graph. The minimum was 0.20, which indicated that most students still have difficulties interpreting the change in velocity from the acceleration-time graph. Results from evaluation questionnaires revealed that students were also satisfied with instruction that related physics content to shooting a basketball.
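
    The normalized gain ⟨g⟩ used to score the TUG-K results is a one-line formula (Hake's definition); a quick sketch with illustrative pre/post scores, not the class's actual data:

```python
def normalized_gain(pre_percent, post_percent):
    """Hake's normalized gain <g> = (post - pre) / (100 - pre):
    the fraction of the possible improvement actually achieved."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# A class moving from 40% to 70% gains half of what it could have gained:
print(normalized_gain(40.0, 70.0))
```

    Because it normalizes by the available headroom, ⟨g⟩ lets classes with different pre-test scores be compared on the same 0-to-1 scale, which is how item-level gains like the 0.77 and 0.20 above are interpreted.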

  8. High speed imaging television system

    DOEpatents

    Wilkinson, William O.; Rabenhorst, David W.

    1984-01-01

    A television system for observing an event which provides a composite video output comprising the serially interlaced images from a plurality of individual cameras, such that the time resolution of the system is greater than the time resolution of any of the individual cameras.

  9. Dynamic Torsional and Cyclic Fracture Behavior of ProFile Rotary Instruments at Continuous or Reciprocating Rotation as Visualized with High-speed Digital Video Imaging.

    PubMed

    Tokita, Daisuke; Ebihara, Arata; Miyara, Kana; Okiji, Takashi

    2017-08-01

    This study examined the dynamic fracture behavior of nickel-titanium rotary instruments in torsional or cyclic loading at continuous or reciprocating rotation by means of high-speed digital video imaging. The ProFile instruments (size 30, 0.06 taper; Dentsply Maillefer, Ballaigues, Switzerland) were categorized into 4 groups (n = 7 in each group) as follows: torsional/continuous (TC), torsional/reciprocating (TR), cyclic/continuous (CC), and cyclic/reciprocating (CR). Torsional loading was performed by rotating the instruments by holding the tip with a vise. For cyclic loading, a custom-made device with a 38° curvature was used. Dynamic fracture behavior was observed with a high-speed camera. The time to fracture was recorded, and the fractured surface was examined with scanning electron microscopy. The TC group initially exhibited necking of the file followed by the development of an initial crack line. The TR group demonstrated opening and closing of a crack according to its rotation in the cutting and noncutting directions, respectively. The CC group separated without any detectable signs of deformation. In the CR group, initial crack formation was recognized in 5 of 7 samples. The reciprocating rotation exhibited a longer time to fracture in both torsional and cyclic fatigue testing (P < .05). The scanning electron microscopic images showed a severely deformed surface in the TR group. The dynamic fracture behavior of NiTi rotary instruments, as visualized with high-speed digital video imaging, varied between the different modes of rotation and different fatigue testing. Reciprocating rotation induced a slower crack propagation and conferred higher fatigue resistance than continuous rotation in both torsional and cyclic loads. Copyright © 2017 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  10. Real-time video compressing under DSP/BIOS

    NASA Astrophysics Data System (ADS)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The programming framework for video compression is constructed using a TMS320C6416 microprocessor, a TDS510 simulator, and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, interrupts, and so on, realizing real-time video compression. To address data transfer within the system, double buffering and the EDMA data transfer controller, based on the architecture of the C64x DSP, are used to move data from external to internal memory, so that data transfer and processing proceed at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS to realize multi-thread scheduling, and the whole system achieves high-speed transfer of large amounts of data. Experimental results show the encoder can realize real-time encoding of 768×576, 25 frames/s video images.
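
    The double-buffer (ping-pong) scheme described above can be sketched in language-agnostic form. This Python simulation models only the buffer-swapping logic, not the actual EDMA controller or C64x code: while the "DMA" fills one buffer, the "CPU" processes the block that landed in the other buffer on the previous step:

```python
def double_buffered_process(blocks, work):
    """Process a stream of data blocks through two alternating buffers,
    mimicking EDMA ping-pong transfers: block i lands in buffer i % 2 while
    the previous block (sitting in the other buffer) is being processed."""
    buffers = [None, None]
    out = []
    for i, block in enumerate(blocks):
        buffers[i % 2] = block                       # "EDMA" fills one buffer...
        if i > 0:
            out.append(work(buffers[(i - 1) % 2]))   # ...while "CPU" works the other
    if blocks:
        out.append(work(buffers[(len(blocks) - 1) % 2]))  # drain the final block
    return out

print(double_buffered_process([[1, 2], [3, 4], [5, 6]], sum))
```

    On real hardware the fill and the processing run concurrently; the payoff is that memory transfer latency is hidden behind computation, which is the point of the EDMA scheme above.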

  11. Real-time strategy game training: emergence of a cognitive flexibility trait.

    PubMed

    Glass, Brian D; Maddox, W Todd; Love, Bradley C

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function.

  12. Real-Time Strategy Game Training: Emergence of a Cognitive Flexibility Trait

    PubMed Central

    Glass, Brian D.; Maddox, W. Todd; Love, Bradley C.

    2013-01-01

    Training in action video games can increase the speed of perceptual processing. However, it is unknown whether video-game training can lead to broad-based changes in higher-level competencies such as cognitive flexibility, a core and neurally distributed component of cognition. To determine whether video gaming can enhance cognitive flexibility and, if so, why these changes occur, the current study compares two versions of a real-time strategy (RTS) game. Using a meta-analytic Bayes factor approach, we found that the gaming condition that emphasized maintenance and rapid switching between multiple information and action sources led to a large increase in cognitive flexibility as measured by a wide array of non-video gaming tasks. Theoretically, the results suggest that the distributed brain networks supporting cognitive flexibility can be tuned by engrossing video game experience that stresses maintenance and rapid manipulation of multiple information sources. Practically, these results suggest avenues for increasing cognitive function. PMID:23950921

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. It could steady ultrasound images, which are infamous for their grainy, blurred quality, and it would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows the two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  14. Highly efficient simulation environment for HDTV video decoder in VLSI design

    NASA Astrophysics Data System (ADS)

    Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter

    2002-01-01

    With the increasing complexity of VLSI designs, especially SoC (System on Chip) implementations of MPEG-2 video decoders with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove to be very slow and costly, and it is difficult to perform full verification until late in the design process. They therefore become a bottleneck in the HDTV video decoder design process and strongly influence its time-to-market. In this paper, the hardware/software interface architecture of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to check and correct errors in the early design stage, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system is achieved by employing several introduced approaches that speed up the simulation and verification task without decreasing performance.

  15. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance, and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find corresponding points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences and removed mismatches using Preemptive RANSAC, which divides all matching points into inliers and outliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
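
    Preemptive RANSAC differs from textbook RANSAC in how it schedules hypotheses, but the core hypothesize-and-verify loop used to split matches into inliers and outliers can be sketched with a simple 2-D translation model (one correspondence per hypothesis; the point pairs and tolerance are illustrative, and the real pipeline estimates epipolar geometry instead):

```python
import random

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """Minimal RANSAC sketch: estimate a 2-D translation between matched
    point pairs ((x, y), (x', y')) and split them into inliers/outliers.
    A translation hypothesis needs only one correspondence."""
    rng = random.Random(seed)
    best_inliers, best_shift = [], (0.0, 0.0)
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # hypothesize from one match
        dx, dy = x2 - x1, y2 - y1
        inliers = [                                 # verify against all matches
            m for m in matches
            if abs(m[1][0] - m[0][0] - dx) <= tol
            and abs(m[1][1] - m[0][1] - dy) <= tol
        ]
        if len(inliers) > len(best_inliers):
            best_inliers, best_shift = inliers, (dx, dy)
    return best_shift, best_inliers

# Four consistent matches shifted by (10, 5) plus one gross mismatch:
matches = [((0, 0), (10, 5)), ((1, 2), (11, 7)), ((3, 1), (13, 6)),
           ((5, 5), (15, 10)), ((2, 2), (40, -3))]
shift, inliers = ransac_translation(matches)
print(shift, len(inliers))
```

    Replacing the translation model with a fundamental-matrix estimate (which needs more correspondences per hypothesis) gives the epipolar-geometry variant the paper relies on.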

  16. A High-Speed Spectroscopy System for Observing Lightning and Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Boggs, L.; Liu, N.; Austin, M.; Aguirre, F.; Tilles, J.; Nag, A.; Lazarus, S. M.; Rassoul, H.

    2017-12-01

    Here we present a high-speed spectroscopy system that can be used to record atmospheric electrical discharges, including lightning and transient luminous events. The system consists of a Phantom V1210 high-speed camera, a Volume Phase Holographic (VPH) grism, an optional optical slit, and lenses. The spectrograph can record videos at speeds of up to 200,000 frames per second and has an effective wavelength band of 550-775 nm for the first-order spectra. When the slit is used, the system has a spectral resolution of about 0.25 nm per pixel. We have constructed a durable enclosure made of heavy-duty aluminum to house the high-speed spectrograph. It has two fans for continuous air flow and a removable tray to mount the spectrograph components. In addition, a Watec video camera (30 frames per second) is attached to the top of the enclosure to provide a scene view. A heavy-duty Pelco pan/tilt motor is used to position the enclosure and can be controlled remotely through a Raspberry Pi computer. An observation campaign was conducted during the summer and fall of 2017 at the Florida Institute of Technology. Several close cloud-to-ground discharges were recorded at 57,000 frames per second. The spectra of a downward stepped negative leader and a positive cloud-to-ground return stroke will be reported.
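A quick consistency check on the figures quoted above: a 550-775 nm first-order band sampled at about 0.25 nm per pixel implies the spectrum spans roughly 900 detector pixels.

```python
# Sanity check of the quoted spectrograph figures (illustrative arithmetic only).
band_nm = 775.0 - 550.0          # effective first-order wavelength band, nm
resolution_nm_per_px = 0.25      # quoted spectral resolution with the slit
pixels = band_nm / resolution_nm_per_px
print(pixels)  # → 900.0
```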

  17. The application of the high-speed photography in the experiments of boiling liquid expanding vapor explosions

    NASA Astrophysics Data System (ADS)

    Chen, Sining; Sun, Jinhua; Chen, Dongliang

    2007-01-01

    In some failure situations, a liquefied petroleum gas (LPG) tank may release its contents, and a series of hazards of varying severity may then occur. The most dangerous accident is the boiling liquid expanding vapor explosion (BLEVE). In this paper, a small-scale experiment was set up to investigate the possible processes that could lead to a BLEVE. As there is some danger in using LPG in the experiments, water was used as the test fluid. Changes in pressure and temperature were measured during the experiment. The ejection of the vapor and the subsequent two-phase flow were recorded by a high-speed video camera. Two pressure peaks were observed after the pressure was released. The vapor was first ejected at high speed, causing a sudden pressure drop that left the liquid superheated. The superheated liquid then boiled violently, causing the liquid contents to swell and the vapor pressure in the tank to rise rapidly. The second pressure peak was possibly due to the swelling two-phase flow, which likely impacted the tank wall violently at high speed. The whole evolution of the two-phase flow was recorded through photos captured by the high-speed video camera, and the "two-step" BLEVE process was confirmed.
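The two-peak pressure signature described above can be illustrated with a synthetic trace and a minimal local-maximum scan. The waveform and threshold below are invented for illustration and are not the experimental data.

```python
import numpy as np

# Synthetic pressure trace: two Gaussian pulses standing in for the
# vapor-ejection peak and the later two-phase-swell peak.
t = np.linspace(0.0, 1.0, 1001)
pressure = (np.exp(-((t - 0.2) / 0.03) ** 2)
            + 1.4 * np.exp(-((t - 0.6) / 0.05) ** 2))

def local_maxima(x, min_height=0.5):
    """Indices where x rises then falls and exceeds min_height."""
    return [i for i in range(1, len(x) - 1)
            if x[i - 1] < x[i] >= x[i + 1] and x[i] > min_height]

peaks = local_maxima(pressure)
print([round(t[i], 2) for i in peaks])  # two peaks, near t = 0.2 s and 0.6 s
```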

  18. Kinematic Measurements of the Vocal-Fold Displacement Waveform in Typical Children and Adult Populations: Quantification of High-Speed Endoscopic Videos

    ERIC Educational Resources Information Center

    Patel, Rita; Donohue, Kevin D.; Unnikrishnan, Harikrishnan; Kryscio, Richard J.

    2015-01-01

    Purpose: This article presents a quantitative method for assessing instantaneous and average lateral vocal-fold motion from high-speed digital imaging, with a focus on developmental changes in vocal-fold kinematics during childhood. Method: Vocal-fold vibrations were analyzed for 28 children (aged 5-11 years) and 28 adults (aged 21-45 years)…

  19. A Bio-Inspired, Motion-Based Analysis of Crowd Behavior Attributes Relevance to Motion Transparency, Velocity Gradients, and Motion Patterns

    PubMed Central

    Raudies, Florian; Neumann, Heiko

    2012-01-01

    The analysis of motion crowds is concerned with the detection of potential hazards for individuals in the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative of potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex area (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency also occurs in non-dangerous crowd behavior, when people moving in opposite directions organize into separate lanes. Our analysis suggests that the combination of motion transparency and slow motion speed can be used to label candidate regions that contain dangerous behavior. In addition, locally detected decelerations, or negative speed gradients of motions, are a precursor of danger in crowd behavior, as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features for describing the behavioral state of a motion crowd. PMID:23300930
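The labeling rule suggested above, transparency combined with slow speed plus negative speed gradients as a precursor, can be sketched in a few lines. This is not the authors' V1/MT/MST model; the arrays and thresholds are illustrative stand-ins for per-region features extracted from video.

```python
import numpy as np

# Hypothetical per-region features along a path through the scene.
speed = np.array([1.8, 1.5, 0.4, 0.3, 1.2])                # mean motion speed
transparent = np.array([False, False, True, True, False])  # multiple motions present

# Danger candidates: motion transparency co-occurring with slow speed.
danger_candidates = transparent & (speed < 0.5)

# Precursor feature: locally detected deceleration (negative speed gradient).
decelerating = np.gradient(speed) < 0

print(danger_candidates.tolist())
print(decelerating.tolist())
```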

  20. Lightning attachment process to common buildings

    NASA Astrophysics Data System (ADS)

    Saba, M. M. F.; Paiva, A. R.; Schumann, C.; Ferro, M. A. S.; Naccarato, K. P.; Silva, J. C. O.; Siqueira, F. V. C.; Custódio, D. M.

    2017-05-01

    The physical mechanism of lightning attachment to grounded structures is one of the most important issues in lightning physics research, and it is the basis for the design of lightning protection systems. Most of what is known about the attachment process comes from leader propagation models that are mostly based on laboratory observations of long electrical discharges or from observations of lightning attachment to tall structures. In this paper we use high-speed videos to analyze the attachment process of downward lightning flashes to ordinary residential buildings. For the first time, we present characteristics of the attachment process to common structures that are present in almost every city (in this case, two buildings under 60 m in São Paulo City, Brazil). Parameters like striking distance and connecting leader speed, widely used in lightning attachment models and in lightning protection standards, are revealed in this work.

    Plain Language Summary: Since the time of Benjamin Franklin, no one has ever recorded high-speed video images of a lightning connection to a common building. This is very difficult to do. Cameras need to be very close to the structure chosen for observation, and a long observation time is required to register one lightning strike to that particular structure. Models and theories used to determine the zone of protection of a lightning rod have been developed, but they all suffer from the lack of field data. This manuscript provides results from high-speed video observations of lightning attachment to low buildings of the kind commonly found in almost every populated area around the world. The proximity of the camera and the high frame rate allowed us to see interesting details that will improve the understanding of the attachment process and, consequently, the models and theories used by lightning protection standards.
This paper also presents spectacular images and videos of lightning flashes connecting to lightning rods that will be of interest not only to the lightning physics scientific community and to engineers who struggle with lightning protection but also to all those who want to understand how a lightning rod works.

  21. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A
video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.

  22. Electrical Arc Ignition Testing of Spacesuit Materials

    NASA Technical Reports Server (NTRS)

    Smith, Sarah; Gallus, Tim; Tapia, Susana; Ball, Elizabeth; Beeson, Harold

    2006-01-01

    A viewgraph presentation on electrical arc ignition testing of spacesuit materials is shown. The topics include: 1) Background; 2) Test Objectives; 3) Test Sample Materials; 4) Test Methods; 5) Scratch Test Objectives; 6) Cotton Scratch Test Video; 7) Scratch Test Results; 8) Entire Data Plot; 9) Closeup Data Plot; 10) Scratch Test Problems; 11) Poke Test Objectives; 12) Poke Test Results; 13) Poke Test Problems; 14) Wire-break Test Objectives; 15) Cotton Wire-break Test Video; 16) High Speed Cotton Wire-break Test Video; 17) Typical Data Plot; 18) Closeup Data Plot; 19) Wire-break Test Results; 20) Wire-break Tests vs.
Scratch Tests; 21) Urethane-coated Nylon; and 22) Moleskin.

  23. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

    We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs. This camera had about ten times the sensitivity of standard high-speed cameras and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting.
This paper summarizes our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  24. Fast and predictable video compression in software: design and implementation of an H.261 codec

    NASA Astrophysics Data System (ADS)

    Geske, Dagmar; Hess, Robert

    1998-09-01

    The use of software codecs for video compression is becoming commonplace in several videoconferencing applications. In order to reduce conflicts with other applications used at the same time, mechanisms for resource reservation on end systems need to determine an upper bound on the computing time used by the codec. This leads to the demand for predictable execution times for compression/decompression. Since compression schemes such as H.261 inherently depend on the motion contained in the video, adaptive admission control is required. This paper presents a data-driven approach based on dynamic reduction of the number of processed macroblocks in peak situations. Besides this, the absolute speed is a point of interest.
The question of whether and how software compression of high-quality video is feasible on today's desktop computers is examined.

  25. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques.
Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  26. Direct speed of sound measurement within the atmosphere during a national holiday in New Zealand

    NASA Astrophysics Data System (ADS)

    Vollmer, M.

    2018-05-01

    Measuring the speed of sound belongs to almost any physics curriculum. Two methods dominate: measuring resonance phenomena of standing waves, and time-of-flight measurements. The second type is conceptually simpler; however, performing such experiments over distances of meters usually requires precise electronic time-measurement equipment if accurate results are to be obtained.
Here, a time-of-flight measurement from a video recording is reported over a distance of several km, with an accuracy for the speed of sound on the order of 1%.

  27. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    PubMed

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey-acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and a detection accuracy of 88-98%, depending on fish species and size.
Using a combination of descriptors improved classification accuracy by ∼2% but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter out false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. © 2016. Published by The Company of Biologists Ltd.

  28. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in air column resonance experiments is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces gradually weakens. In this study, we generated tones of varying frequency using the Audacity software and stored them on a mobile phone to serve as the sound source. One advantage of this sound source is its stability, which enables it to produce an equally strong sound throughout. The movement of water in a glass tube mounted on the resonance apparatus and the tone played from the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency. The resulting sound lasts longer, so it can be used for the first, second, third, and subsequent resonance experiments.
This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air column resonance experiments, (2) illustrate the sound waves that occur at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that (1) substitute tones for a tuning-fork sound can be made using the Audacity software; (2) the form of the sound waves that occur at the first, second, and third resonances in the air column can be drawn based on the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on chart analysis with the Logger Pro software, it is 343.9 ± 0.3171 m/s.

  29. Designing a scalable video-on-demand server with data sharing

    NASA Astrophysics Data System (ADS)

    Lim, Hyeran; Du, David H. C.

    2000-12-01

    As current disk space and transfer speeds increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing, provided by the spatial-reuse ring network between servers and disks, not only increases utilization toward the full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system.
Given a representative access profile, we propose an algorithm that finds an initial condition and places the videos on the disks in the system. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos have been placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in video demand. Considering that the underlying problem is NP-hard, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
  30. Nurse-surgeon object transfer: video analysis of communication and situation awareness in the operating theatre.

    PubMed

    Korkiakangas, Terhi; Weldon, Sharon-Marie; Bezemer, Jeff; Kneebone, Roger

    2014-09-01

    One of the most central collaborative tasks during surgical operations is the passing of objects, including instruments. Little is known about how nurses and surgeons achieve this. The aim of the present study was to explore what factors affect this routine-like task, resulting in fast or slow transfer of objects. A qualitative video study, informed by an observational ethnographic approach, was conducted in a major teaching hospital in the UK. A total of 20 general surgical operations were observed. In total, approximately 68 h of video data have been reviewed. A subsample of 225 min was analysed in detail using interactional video analysis developed within the social sciences. Two factors affecting object transfer were observed: (1) relative instrument trolley position and (2) alignment. The scrub nurse's instrument trolley position (close to vs. further back from the surgeon) and alignment (gaze direction) affect communication with the surgeon and, consequently, the speed of object transfer.
When the scrub nurse was standing close to the surgeon and "converged" to follow the surgeon's movements, the transfer occurred more seamlessly and faster (<1.0 s) than when the scrub nurse was standing further back from the surgeon and did not follow the surgeon's movements (>1.0 s). The smoothness of object transfer can be improved by adjusting the scrub nurse's instrument trolley position, enabling better monitoring of the surgeon's bodily conduct and affording early orientation (awareness) to an upcoming request (changing situation). Object transfer is facilitated by the surgeon's embodied practices, which can draw the nurse's attention to the request and, in response, speed up the object transfer. A simple intervention highlighting the significance of these factors could improve communication in the operating theatre. Copyright © 2014 Elsevier Ltd. All rights reserved.

  31. Analysis of motion in speed skating

    NASA Astrophysics Data System (ADS)

    Koga, Yuzo; Nishimura, Tetsu; Watanabe, Naoki; Okamoto, Kousuke; Wada, Yuhei

    1997-03-01

    Motion in sports has been studied by many researchers from medical, psychological, and mechanical perspectives. Here, we analyze the speed skating motion dynamically with the aim of achieving the best record. As official speed skating competitions are performed on an oval rink, the skating motion must be studied in three phases: the starting phase and the straight- and curved-course skating phases. Visual data of the skating motion are indispensable for kinematic analysis.
We therefore recorded several subjects' skating motions with 8 mm video cameras in order to obtain three-dimensional data. As a first step, the movement of the skater's center of gravity (C.G.) is discussed in this paper, because the skating motion is very complicated. The movement of the C.G. gives information about the reaction force on the skate blade from the ice surface. We discuss the differences among the skating motions of the studied subjects. Our final goal is to suggest the best skating form for achieving the finest record.

  32. Acidification reduced growth rate but not swimming speed of larval sea urchins.

    PubMed

    Chan, Kit Yu Karen; García, Eliseba; Dupont, Sam

    2015-05-15

    Swimming behaviors of planktonic larvae impact dispersal and population dynamics of many benthic marine invertebrates. This key ecological function is modulated by larval development dynamics, the biomechanics of the resulting morphology, and behavioral choices. Studies on ocean acidification effects on larval stages have yet to address this important interaction between development and swimming under environmentally relevant flow conditions. Our video motion analysis revealed that pH spanning present and future natural variability (pH 8.0, 7.6 and 7.2) did not affect age-specific swimming of the larval green urchin Strongylocentrotus droebachiensis in still water or in shear, despite acidified individuals being significantly smaller in size (reduced growth rate). This maintenance of speed and stability in shear was accompanied by an overall change in size-corrected shape, implying changes in swimming biomechanics.
Our observations highlight strong evolutionary pressure to maintain swimming in a varying environment and the plasticity of larval responses to environmental change.

  33. Head bobbing and the body movement of little egrets (Egretta garzetta) during walking.

    PubMed

    Fujita, Masaki

    2003-01-01

    Although previous studies have indicated that head bobbing in birds is an optokinetic movement, head bobbing may also be controlled by biomechanical constraints when it occurs during walking. In the present study, the head bobbing, center of gravity, and body movements of little egrets (Egretta garzetta) during walking were examined by determining the position of the center of gravity using carcasses and by motion analysis of video footage of wild egrets during walking. The results showed that the hold phase occurs while the center of gravity is over the supporting foot during the single-support phase. In addition, the peak speed of neck extension coincided with the peak speed of the center of gravity.
These movements are similar to those of pigeons, and suggest the presence of biomechanical constraints on the pattern of head bobbing and body movements during walking.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010ASPC..434..209B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010ASPC..434..209B"><span>Advanced Architectures for Astrophysical Supercomputing</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.</p> <p>2010-12-01</p> <p>Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding-up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. 
We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28068887','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28068887"><span>The Use of Smart Glasses for Surgical Video Streaming.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu</p> <p>2017-04-01</p> <p>Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. 
Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/AD1005160','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/AD1005160"><span>Strategies for Transporting Data Between Classified and Unclassified Networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2016-03-01</p> <p>datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ...and the exit point of data are separate and can be tightly controlled. This does effectively prevent the comingling of data and is used in industry to...perform functions such as streaming video and audio from secure to insecure networks (ref. 1). 
A second disadvantage lies in the fact that the</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10396E..21L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10396E..21L"><span>Real-time heart rate measurement for multi-people using compressive tracking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng</p> <p>2017-09-01</p> <p>The rise of aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image PhotoPlethysmoGraphy(IPPG) aids in the development of these solutions by allowing for the extraction of physiological signals from video data. However, the main deficiencies of the recent IPPG methods are non-automated, non-real-time and susceptible to motion artifacts(MA). In this paper, a real-time heart rate(HR) detection method for multiple subjects simultaneously was proposed and realized using the open computer vision(openCV) library, which consists of getting multiple subjects' facial video automatically through a Webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate by our improved Adaboost algorithm, reducing the MA by our improved compress tracking(CT) algorithm, wavelet noise-suppression algorithm for denoising and multi-threads for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. 
Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and per-frame processing nearly reaches real-time: experiments with video recordings of ten subjects at a pixel resolution of 600×800 show that the average HR detection speed was about 17 frames per second (fps).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009SPIE.7244E..05C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009SPIE.7244E..05C"><span>Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cao, Tam P.; Deng, Guang; Elton, Darrell</p> <p>2009-02-01</p> <p>In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms in different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on an FPGA while maintaining comparable system performance. 
The system is capable of processing 60 live video frames per second.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/9146961','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/9146961"><span>Using video-oriented instructions to speed up sequence comparison.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wozniak, A</p> <p>1997-04-01</p> <p>This document presents an implementation of the well-known Smith-Waterman algorithm for comparison of protein and nucleic acid sequences, using specialized video instructions. These instructions, SIMD-like in their design, enable parallelization of the algorithm at the instruction level. Benchmarks on an ULTRA SPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP--a LArge Scale Sequence compArison Package developed at INRIA--which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (1,8531,385 residues) in 29 s. This procedure is not restricted to databank scanning. 
It applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25911958','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25911958"><span>Variability in venom volume, flow rate and duration in defensive stings of five scorpion species.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>van der Meijden, Arie; Coelho, Pedro; Rasko, Mykola</p> <p>2015-06-15</p> <p>Scorpions have been shown to control their venom usage in defensive encounters, depending on the perceived threat. 
Potentially, the venom amount that is injected could be controlled by reducing the flow speed, the flow duration, or both. We here investigated these variables by allowing scorpions to sting into an oil-filled chamber, and recording the accreting venom droplets with high-speed video. The size of the spherical droplets on the video can then be used to calculate their volume. We recorded defensive stings of 20 specimens representing 5 species. Significant differences in the flow rate and total expelled volume were found between species. These differences are likely due to differences in overall size between the species. Large variation in both venom flow speed and duration are described between stinging events of single individuals. Both venom flow rate and flow duration correlate highly with the total expelled volume, indicating that scorpions may control both variables in order to achieve a desired end volume of venom during a sting. Copyright © 2015 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4666755','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4666755"><span>Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael</p> <p>2015-01-01</p> <p>Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. 
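The droplet-volume estimate used in the scorpion venom study above (droplet size on video converted to volume, assuming spherical droplets) can be sketched in a few lines. This is a minimal illustration, not the study's actual code; the droplet diameters and frame rate below are hypothetical measurements from high-speed video frames:

```python
import math

def droplet_volume_ul(diameter_mm: float) -> float:
    """Volume of a spherical droplet, V = pi * d^3 / 6, in microliters
    (1 mm^3 == 1 microliter)."""
    return math.pi * diameter_mm ** 3 / 6.0

def venom_flow(diameters_mm, frame_interval_s):
    """Total expelled volume and mean flow rate from per-frame diameters of
    an accreting droplet: the last frame holds the full expelled volume."""
    total_ul = droplet_volume_ul(diameters_mm[-1])
    duration_s = (len(diameters_mm) - 1) * frame_interval_s
    rate_ul_per_s = total_ul / duration_s if duration_s > 0 else float("nan")
    return total_ul, rate_ul_per_s

# Hypothetical example: a droplet growing over five frames captured at 1000 fps
total, rate = venom_flow([0.2, 0.5, 0.8, 1.0, 1.1], frame_interval_s=0.001)
```

Measuring both the duration (number of frames with growth) and the end volume in this way is what lets the study separate flow rate from flow duration as contributors to the total.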
Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061108','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5061108"><span>Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pichette, Julien; Laurence, Audrey; Angulo, Leticia; Lesage, Frederic; Bouthillier, Alain; Nguyen, Dang Khoa; Leblond, Frederic</p> <p>2016-01-01</p> <p>Abstract. Using light, we are able to visualize the hemodynamic behavior of the brain to better understand neurovascular coupling and cerebral metabolism. 
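The stereo-triangulation principle behind the camera-laser-projector reconstruction in the vocal-fold entry above reduces, in its simplest rectified form, to converting each laser point's disparity into depth. This is a generic sketch under standard pinhole assumptions; the focal length, baseline, and disparities are hypothetical values, not parameters from that study:

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Classic rectified stereo triangulation: depth z = f * B / d.
    focal_px: focal length in pixels; baseline_mm: camera-projector baseline;
    disparity_px: offset of a laser point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_mm / disparity_px

# Hypothetical laser points: disparities in pixels -> depths in mm
depths = [depth_from_disparity(2000.0, 50.0, d) for d in (40.0, 50.0, 80.0)]
```

Larger disparities map to nearer points, which is how per-point depth, and hence the vertical motion of the fold surface, is recovered frame by frame.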
In vivo optical imaging of tissue using endogenous chromophores necessitates spectroscopic detection to ensure molecular specificity, as well as sufficiently high imaging speed and signal-to-noise ratio, to allow dynamic physiological changes to be captured, isolated, and used as surrogates of pathophysiological processes. An optical imaging system is introduced using a 16-band on-chip hyperspectral camera. Using this system, we show that up to three dyes can be imaged and quantified in a tissue phantom at video-rate through the optics of a surgical microscope. In vivo human patient data are presented demonstrating that brain hemodynamic responses can be measured intraoperatively with molecular specificity at high speed. PMID:27752519</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6350E..0SY','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6350E..0SY"><span>Fiber to the home: next generation network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Chengxin; Guo, Baoping</p> <p>2006-07-01</p> <p>A Fiber To The Home (FTTH) strategy for next-generation networks capable of carrying converged telephone, television (TV), very high-speed internet, and very high-speed bi-directional data services (such as video-on-demand (VOD) and gaming) is presented. The potential market is analyzed. The barriers and appropriate strategies are also discussed. 
Several technical issues, such as powering methods, optical fiber cabling, and alternative network architectures, are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1406353','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1406353"><span>Why Can’t You Go Faster than Light?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lincoln, Don</p> <p></p> <p>One of the most counterintuitive facts of our universe is that you can’t go faster than the speed of light. From this single observation arise all of the mind-bending behaviors of special relativity. But why is this so? In this in-depth video, Fermilab’s Dr. Don Lincoln explains the real reason that you can’t go faster than the speed of light. It will blow your mind.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1995SPIE.2608....2L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1995SPIE.2608....2L"><span>Copper link evaluations/solutions for fiber channel, SSA, SONET, ATM, and other services through 4 Gb/sec: basic information, test results, and evaluation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Leib, Michael J.</p> <p>1995-10-01</p> <p>Technitrol, the original designer of MIL-STD-1553 transformers for the original military 1 Mb/s LAN, has advanced the state of the art one further notch, introducing a series of transceivers that allow high-speed (through 1 Gb/s) data transmission over copper wire instead of fiber optic cable. 
One such device can be employed to implement the Fibre Channel Interface as defined by the X3T11 ANSI Fibre Channel Committee using either mini coax, Type 1 shielded twisted pair, twinax, or video cable. The technology now exists to upgrade data transmission rates on current physical media to speeds formerly only available with fiber optic cabling. Copper transceiver technology provides a cost-effective alternative for demanding high-speed applications such as high-speed serial data transfer, high-speed disk and tape storage transfer, imaging telemetry, radar, and other avionics applications. Eye diagrams will be presented to show that excellent data transmission at rates of 1 gigabit/sec with low jitter is achievable over mini coax at distances to approximately 50 meters, shielded twisted pair and twinax cable to distances of 105 meters, and video cable to distances of 175 meters. Distances are greater at lower data rates. As a member of the X3T11 ANSI Fibre Channel Committee, Technitrol has developed a Physical Media (copper wire) Dependent (PMD) transceiver not only compliant with the Fibre Channel Specifications but exceeding the specifications by a factor greater than four. Conceivably, this opens high-speed interconnections for today's high data rate requirements to copper cabling systems. 
Fiber-optic cabling issues need not be addressed to achieve high-speed data transfer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26550603','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26550603"><span>Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chandrasekaran, Jeyamala; Thiruvengadam, S J</p> <p>2015-01-01</p> <p>Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. 
Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4621361','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4621361"><span>Ensemble of Chaotic and Naive Approaches for Performance Enhancement in Video Encryption</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chandrasekaran, Jeyamala; Thiruvengadam, S. J.</p> <p>2015-01-01</p> <p>Owing to the growth of high performance network technologies, multimedia applications over the Internet are increasing exponentially. Applications like video conferencing, video-on-demand, and pay-per-view depend upon encryption algorithms for providing confidentiality. Video communication is characterized by distinct features such as large volume, high redundancy between adjacent frames, video codec compliance, syntax compliance, and application specific requirements. Naive approaches for video encryption encrypt the entire video stream with conventional text based cryptographic algorithms. Although naive approaches are the most secure for video encryption, the computational cost associated with them is very high. This research work aims at enhancing the speed of naive approaches through chaos based S-box design. Chaotic equations are popularly known for randomness, extreme sensitivity to initial conditions, and ergodicity. 
The proposed methodology employs two-dimensional discrete Henon map for (i) generation of dynamic and key-dependent S-box that could be integrated with symmetric algorithms like Blowfish and Data Encryption Standard (DES) and (ii) generation of one-time keys for simple substitution ciphers. The proposed design is tested for randomness, nonlinearity, avalanche effect, bit independence criterion, and key sensitivity. Experimental results confirm that chaos based S-box design and key generation significantly reduce the computational cost of video encryption with no compromise in security. PMID:26550603</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9971E..1MG','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9971E..1MG"><span>Layer-based buffer aware rate adaptation design for SHVC video streaming</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan</p> <p>2016-09-01</p> <p>This paper proposes a layer based buffer aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-art video coding technique that can alleviate various issues caused by simulcast based adaptive video streaming. 
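A minimal sketch of the Henon-map-driven, key-dependent S-box generation described in the two video-encryption entries above. The map parameters and seed stand in for hypothetical key material, and ranking the chaotic orbit is one common way to turn it into a byte permutation; the papers' exact construction may differ:

```python
def henon_orbit(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Iterate the 2-D discrete Henon map: x' = 1 - a*x^2 + y, y' = b*x."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return xs

def chaotic_sbox(a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Key-dependent S-box: rank 256 chaotic samples to obtain a
    permutation of 0..255; different keys (a, b, x0, y0) give different boxes."""
    xs = henon_orbit(256, a, b, x0, y0)
    order = sorted(range(256), key=lambda i: xs[i])
    sbox = [0] * 256
    for rank, idx in enumerate(order):
        sbox[idx] = rank
    return sbox

sbox = chaotic_sbox()
```

Because the Henon map is extremely sensitive to its initial conditions, a tiny change in the key material yields a completely different permutation, which is the property the key-sensitivity tests in the abstracts evaluate.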
With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such layer based coding structure allows fine granularity rate adaptation for the video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and the performance comparison between proposed layer-based streaming approach and conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19900011344','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19900011344"><span>Tools for 3D scientific visualization in computational aerodynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val</p> <p>1989-01-01</p> <p>The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. 
The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21654094','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21654094"><span>Physical game demands in elite rugby union: a global positioning system analysis and possible implications for rehabilitation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Coughlan, Garrett F; Green, Brian S; Pook, Paul T; Toolan, Eoin; O'Connor, Sean P</p> <p>2011-08-01</p> <p>Descriptive. To evaluate the physical demands of an international Rugby Union-level game using a global positioning system (GPS). Elite Rugby Union teams currently employ the latest technology to monitor and evaluate physical demands of training and games on their players. 
GPS data from 2 players, a back and a forward, were collected during an international Rugby Union game. Locomotion speed, total body load, and body load sustained in tackles and scrums were analyzed. Players completed an average distance of 6715 m and spent the major portion of the game standing or walking, interspersed with medium- and high-intensity running activities. The back performed a higher number of high-intensity sprints and reached a greater maximal speed. Body load data revealed that high levels of gravitational force are sustained in tackling and scrum tasks. The current study provides a detailed GPS analysis of the physical demands of international Rugby Union players. These data, when combined with game video footage, may assist sports medicine professionals in understanding the demands of the game and mechanism of injury, as well as improving injury rehabilitation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://images.nasa.gov/#/details-9903700.html','SCIGOVIMAGE-NASA'); return false;" href="https://images.nasa.gov/#/details-9903700.html"><span>Benefit from NASA</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://images.nasa.gov/">NASA Image and Video Library</a></p> <p></p> <p>1999-06-01</p> <p>Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise such as snow. VISAR could also have applications in medical and meteorological imaging. 
It could steady ultrasound images, which are infamous for their grainy, blurred quality. It would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009LNCS.5889..511P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009LNCS.5889..511P"><span>Video Relay Service for Signing Deaf - Lessons Learnt from a Pilot Study</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ponsard, Christophe; Sutera, Joelle; Henin, Michael</p> <p></p> <p>The generalization of high-speed Internet, efficient compression techniques and low-cost hardware have made low-cost video communication possible since the year 2000. For the Deaf community, this enables native communication in sign language and better communication with hearing people over the phone. This implies that Video Relay Service can take over from the old Text Relay Service, which is less natural and requires mastering written language. A number of such services have developed throughout the world. The objectives of this paper are to present the experience gained in the Walloon Region of Belgium, to share a number of lessons learnt, and to provide recommendations at the technical, user adoption and political levels. 
A survey of video relay services around the world is presented together with the feedback from users both before and after using the pilot service.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JaJAP..56dCF06L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JaJAP..56dCF06L"><span>Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans</p> <p>2017-04-01</p> <p>Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps.
The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving high accuracy comparable to previous work.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMAE33A2516S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMAE33A2516S"><span>Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.</p> <p>2017-12-01</p> <p>Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space.
Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003SPIE.5286..962Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003SPIE.5286..962Y"><span>Content-based TV sports video retrieval using multimodal analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru</p> <p>2003-09-01</p> <p>In this paper, we propose content-based video retrieval, that is, retrieval based on a video's semantic content.
Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008APS..DFD.AW001S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008APS..DFD.AW001S"><span>Videos and images from 25 years of teaching compressible flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Settles, Gary</p> <p>2008-11-01</p> <p>Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA394741','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA394741"><span>U.S.
Army Dugway Proving Ground, UT and the West Desert Test Center</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2001-04-30</p> <p>Fragments recovered from briefing slides describing the test center's infrastructure: fiber-optic network; 32 channels of video; 7 communications consoles with 20 radio nets and phone patch; over 40 communication drops and cell phones; centralized ...; cell phone tower sites, with cell phone activation in May '01; logistical support (a full-service community); ground transportation and vehicle maintenance; voice, data and video communications in the Mission Control Center; data speed of 100 MB/sec; Ethernet connections; and commercial power to most sites.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19850026653','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19850026653"><span>Balloon-borne video cassette recorders for digital data storage</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Althouse, W. E.; Cook, W. R.</p> <p>1985-01-01</p> <p>A high-speed, high-capacity digital data storage system was developed for a new balloon-borne gamma-ray telescope. The system incorporates economical consumer products: the portable video cassette recorder (VCR) and a relatively new item, the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40-hour balloon flight.
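The recorder's quoted figures are easy to sanity-check with a few lines of arithmetic; the sketch below assumes decimal gigabytes (1 GB = 10^9 bytes) and a constant recording rate over the full flight:

```python
# Sanity check on the balloon-borne recorder figures: a continuous
# 1.4 megabit/s stream sustained over a 40-hour balloon flight.
# Assumes decimal units (1 GB = 1e9 bytes); both values are from the text.

RATE_BITS_PER_S = 1.4e6      # continuous data storage rate, bits per second
FLIGHT_HOURS = 40            # balloon flight duration

total_bits = RATE_BITS_PER_S * FLIGHT_HOURS * 3600
total_gb = total_bits / 8 / 1e9   # bits -> bytes -> gigabytes
```

The result comes out slightly above 25 GB, consistent with the 25-gigabyte capacity stated for the eight-VCR system.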
Data storage capacity is 25 gigabytes and power consumption is only 10 watts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880002543','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880002543"><span>Lewis Information Network (LINK): Background and overview</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schulte, Roger R.</p> <p>1987-01-01</p> <p>The NASA Lewis Research Center supports many research facilities with many isolated buildings, including wind tunnels, test cells, and research laboratories. These facilities are all located on a 350-acre campus adjacent to the Cleveland Hopkins Airport. The function of NASA-Lewis is to do basic and applied research in all areas of aeronautics, fluid mechanics, materials and structures, space propulsion, and energy systems. These functions require a great variety of remote high-speed, high-volume data communications for computing and interactive graphic capabilities. In addition, new requirements for local distribution of intercenter video teleconferencing and data communications via satellite have developed. To address these and future communications requirements for the next 15 years, a project team was organized to design and implement a new high-speed communication system that would handle both data and video information in a common lab-wide Local Area Network. The project team selected cable television broadband coaxial cable technology as the communications medium and first installation of in-ground cable began in the summer of 1980.
The Lewis Information Network (LINK) became operational in August 1982 and has become the backbone of all data communications and video.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/870158','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/servlets/purl/870158"><span>High speed imager test station</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Yates, George J.; Albright, Kevin L.; Turko, Bojan T.</p> <p>1995-01-01</p> <p>A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. 
A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/131910','DOE-PATENT-XML'); return false;" href="https://www.osti.gov/biblio/131910"><span>High speed imager test station</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/doepatents">DOEpatents</a></p> <p>Yates, G.J.; Albright, K.L.; Turko, B.T.</p> <p>1995-11-14</p> <p>A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA.
The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25898843','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25898843"><span>Not So Fast: Swimming Behavior of Sailfish during Predator-Prey Interactions using High-Speed Video and Accelerometry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Marras, Stefano; Noda, Takuji; Steffensen, John F; Svendsen, Morten B S; Krause, Jens; Wilson, Alexander D M; Kurvers, Ralf H J M; Herbert-Read, James; Boswell, Kevin M; Domenici, Paolo</p> <p>2015-10-01</p> <p>Billfishes are considered among the fastest swimmers in the oceans. Despite early estimates of extremely high speeds, more recent work showed that these predators (e.g., blue marlin) spend most of their time swimming slowly, rarely exceeding 2 m s⁻¹.
Predator-prey interactions provide a context within which one may expect maximal speeds both by predators and prey. Beyond speed, however, an important component determining the outcome of predator-prey encounters is unsteady swimming (i.e., turning and accelerating). Although large predators are faster than their small prey, the latter show higher performance in unsteady swimming. To counter the evasive behaviors of their highly maneuverable prey, sailfish and other large aquatic predators possess morphological adaptations, such as elongated bills, which can be moved more rapidly than the whole body itself, facilitating capture of the prey. Therefore, it is an open question whether such supposedly very fast swimmers do use high-speed bursts when feeding on evasive prey, in addition to using their bill for slashing prey. Here, we measured the swimming behavior of sailfish by using high-frequency accelerometry and high-speed video observations during predator-prey interactions. These measurements allowed analyses of tail beat frequencies to estimate swimming speeds. Our results suggest that sailfish burst at speeds of about 7 m s⁻¹ and do not exceed swimming speeds of 10 m s⁻¹ during predator-prey interactions. These speeds are much lower than previous estimates. In addition, the oscillations of the bill during swimming with, and without, extension of the dorsal fin (i.e., the sail) were measured. We suggest that extension of the dorsal fin may allow sailfish to improve the control of the bill and minimize its yaw, hence preventing disturbance of the prey. Therefore, sailfish, like other large predators, may rely mainly on accuracy of movement and the use of the extensions of their bodies, rather than resorting to top speeds when hunting evasive prey. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.
For permissions please email: journals.permissions@oup.com.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/17210957','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/17210957"><span>The desert ant odometer: a stride integrator that accounts for stride length and walking speed.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wittlinger, Matthias; Wehner, Rüdiger; Wolf, Harald</p> <p>2007-01-01</p> <p>Desert ants, Cataglyphis, use path integration as a major means of navigation. Path integration requires measurement of two parameters, namely, direction and distance of travel. Directional information is provided by a celestial compass, whereas distance measurement is accomplished by a stride integrator, or pedometer. Here we examine the recently demonstrated pedometer function in more detail. By manipulating leg lengths in foraging desert ants we could also change their stride lengths. Ants with elongated legs ('stilts') or shortened legs ('stumps') take larger or shorter strides, respectively, and misgauge travel distance. Travel distance is overestimated by experimental animals walking on stilts, and underestimated by animals walking on stumps - strongly indicative of stride integrator function in distance measurement. High-speed video analysis was used to examine the actual changes in stride length, stride frequency and walking speed caused by the manipulations of leg length. Unexpectedly, quantitative characteristics of walking behaviour remained almost unaffected by imposed changes in leg length, demonstrating remarkable robustness of leg coordination and walking performance. These data further allowed normalisation of homing distances displayed by manipulated animals with regard to scaling and speed effects. 
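The stride-integrator hypothesis reduces to simple arithmetic: an ant counts strides on the outbound trip, then replays the same stride count homeward, so homing distance scales with the ratio of manipulated to normal stride length. The sketch below follows that logic; the outbound distance and stride ratios are illustrative values, not measurements from the study:

```python
# Pedometer-model sketch: the ant replays its outbound stride count on the
# homebound run with an altered stride length. The linear scaling follows
# from the model; the numeric values below are invented for illustration.

def predicted_homing_distance(outbound_m: float, stride_ratio: float) -> float:
    """Homebound distance when stride length is scaled by `stride_ratio`."""
    return outbound_m * stride_ratio

overshoot = predicted_homing_distance(10.0, 1.5)   # 'stilts': elongated legs
undershoot = predicted_homing_distance(10.0, 0.6)  # 'stumps': shortened legs
```

With these illustrative ratios, stilt walkers overshoot a 10 m homing distance to 15 m, while stump walkers stop short at 6 m, matching the direction of the misgauging reported above.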
The predicted changes in homing distance are in quantitative agreement with the experimental data, further supporting the pedometer hypothesis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040086464','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040086464"><span>Helping Video Games Rewire "Our Minds"</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pope, Alan T.; Palsson, Olafur S.</p> <p>2001-01-01</p> <p>Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA-patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination.
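The continuous-modulation idea can be sketched in a few lines. The mapping below, a linear penalty on deviation from a target EEG band ratio, is a hypothetical example of such a modulation function, not the actual function used in the patented method:

```python
# Hypothetical biofeedback modulation: a physiological index continuously
# scales a game parameter while the player controls the game normally.
# The target ratio, gain, and linear-penalty form are all assumptions.

def modulated_speed(base_speed: float, band_ratio: float,
                    target: float = 1.0, gain: float = 0.5) -> float:
    """Scale game-character speed down as band_ratio deviates from target."""
    penalty = gain * abs(band_ratio - target)
    return max(0.0, base_speed * (1.0 - penalty))
```

Called once per game tick, a function like this rewards signals near the target with full character speed and degrades mobility smoothly as the signal drifts, which is the sense in which the game parameter is "continuously modulated" while ordinary controller input still drives play.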
These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26835954','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26835954"><span>Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo</p> <p>2016-01-20</p> <p>A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. 
Experimental results reveal that the proposed method can generate Fresnel CGH patterns of 1920×1080 pixels at 39.8 frames per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5738070','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5738070"><span>Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yaghoobi Ershadi, Nastaran</p> <p>2017-01-01</p> <p>Traffic surveillance systems are interesting to many researchers to improve the traffic control and reduce the risk caused by accidents. In this area, many published works are only concerned about vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task when we have bad weather conditions in winter (snowy, rainy, windy, etc.), dusty weather in arid and semi-arid regions, at night, etc. Also, it is very important to consider speed of vehicles in the complicated weather condition. In this paper, we improved our method to track and count vehicles in dusty weather with vibrating camera. For this purpose, we used a background subtraction based strategy mixed with an extra processing to segment vehicles. In this paper, the extra processing included the analysis of the headlight size, location, and area.
In our work, tracking was done between consecutive frames via a generalized particle filter to detect the vehicle and pair the headlights using the connected component analysis. Vehicle counting was then performed based on the pairing result; using the centroid of each blob, we calculated the distance travelled between two consecutive frames and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions such as dusty or foggy weather, vibrating camera, and on roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, in different traffic conditions. PMID:29261719</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29261719','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29261719"><span>Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yaghoobi Ershadi, Nastaran</p> <p>2017-01-01</p> <p>Traffic surveillance systems are interesting to many researchers to improve the traffic control and reduce the risk caused by accidents. In this area, many published works are only concerned about vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task when we have bad weather conditions in winter (snowy, rainy, windy, etc.), dusty weather in arid and semi-arid regions, at night, etc. Also, it is very important to consider speed of vehicles in the complicated weather condition.
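The centroid-based speed estimate described in this work amounts to a distance-over-time formula between consecutive frames. A minimal sketch follows; the pixel-to-metre scale is a hypothetical calibration constant, not a value from the paper:

```python
import math

# Speed from blob centroids in consecutive frames: pixel displacement,
# converted via an assumed ground-plane calibration, divided by the
# inter-frame time (1/fps). Centroid coordinates are (x, y) in pixels.

def speed_from_centroids(c1, c2, fps: float, metres_per_pixel: float) -> float:
    """Estimate vehicle speed in m/s from centroids in consecutive frames."""
    dist_px = math.hypot(c2[0] - c1[0], c2[1] - c1[1])
    return dist_px * metres_per_pixel * fps   # dividing by dt = 1/fps
```

For example, a blob whose centroid moves 5 pixels between frames of 25 fps video, under a calibration of 0.1 m per pixel, yields 12.5 m/s.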
In this paper, we improved our method to track and count vehicles in dusty weather with vibrating camera. For this purpose, we used a background subtraction based strategy mixed with an extra processing to segment vehicles. In this paper, the extra processing included the analysis of the headlight size, location, and area. In our work, tracking was done between consecutive frames via a generalized particle filter to detect the vehicle and pair the headlights using the connected component analysis. Vehicle counting was then performed based on the pairing result; using the centroid of each blob, we calculated the distance travelled between two consecutive frames and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions such as dusty or foggy weather, vibrating camera, and on roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, in different traffic conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ISPAr49B2..497B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ISPAr49B2..497B"><span>Analysis of Spatio-Temporal Traffic Patterns Based on Pedestrian Trajectories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Busch, S.; Schindler, T.; Klinger, T.; Brenner, C.</p> <p>2016-06-01</p> <p>For driver assistance and autonomous driving systems, it is essential to predict the behaviour of other traffic participants. Usually, standard filter approaches are used to this end, however, in many cases, these are not sufficient. For example, pedestrians are able to change their speed or direction instantly.
Also, there may not be enough observation data to determine the state of an object reliably, e.g. in case of occlusions. In those cases, it is very useful if a prior model exists, which suggests certain outcomes. For example, it is useful to know that pedestrians are usually crossing the road at a certain location and at certain times. This information can then be stored in a map, which can then be used as a prior in scene analysis, or in practical terms to reduce the speed of a vehicle in advance in order to minimize critical situations. In this paper, we present an approach to derive such a spatio-temporal map automatically from the observed behaviour of traffic participants in everyday traffic situations. In our experiments, we use one stationary camera to observe a complex junction, where cars, public transportation and pedestrians interact. We concentrate on the pedestrians' trajectories to map traffic patterns. In the first step, we extract trajectory segments from the video data. These segments are then clustered in order to derive a spatial model of the scene, in terms of a spatially embedded graph. In the second step, we analyse the temporal patterns of pedestrian movement on this graph. We are able to derive traffic light sequences as well as the timetables of nearby public transportation. To evaluate our approach, we used a 4-hour video sequence.
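One way to recover a periodic pattern such as a traffic-light cycle from the clustered trajectories is to examine the spacing between crossing-event timestamps on a graph edge. The sketch below uses a median-of-differences estimator on synthetic timestamps; both the estimator and the data are our own illustrative assumptions, not the paper's method:

```python
from statistics import median

# Estimate a recurring cycle length (e.g. a traffic-light period) from
# timestamps of pedestrian crossings observed at one location. The median
# of successive differences is a simple, outlier-tolerant estimator.

def estimate_cycle_s(timestamps: list) -> float:
    """Return the median spacing (seconds) between successive events."""
    diffs = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return median(diffs)

# Synthetic crossings roughly every 90 s over a short observation window:
crossings = [0.0, 91.0, 180.0, 271.0, 360.0]
cycle = estimate_cycle_s(crossings)
```

On the synthetic data the estimate comes out at 90 s despite the one-second jitter, illustrating how temporal regularities can be read off clustered event streams.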
We show that we are able to derive traffic light sequences as well as timetables of nearby public transportation.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18244634','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18244634"><span>2D-pattern matching image and video compression: theory, algorithms, and experiments.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth</p> <p>2002-01-01</p> <p>In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation.
We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression, making it particularly suitable for networked multimedia applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20120016681','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20120016681"><span>Modernization of B-2 Data, Video, and Control Systems Infrastructure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.</p> <p>2012-01-01</p> <p>The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable, signal processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers, where high-speed sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems.
Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic networks (FON) infrastructure and human machine interface (HMI) operator screens. New modern IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems has been central to the architecture during modernization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29769558','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29769558"><span>High-speed Fourier ptychographic microscopy based on programmable annular illuminations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Sun, Jiasong; Zuo, Chao; Zhang, Jialin; Fan, Yao; Chen, Qian</p> <p>2018-05-16</p> <p>High-throughput quantitative phase imaging (QPI) is essential to cellular phenotypes characterization as it allows high-content cell analysis and avoids adverse effects of staining reagents on cellular viability and cell signaling. Among different approaches, Fourier ptychographic microscopy (FPM) is probably the most promising technique to realize high-throughput QPI by synthesizing a wide-field, high-resolution complex image from multiple angle-variably illuminated, low-resolution images. However, the large dataset requirement in conventional FPM significantly limits its imaging speed, resulting in low temporal throughput.
Moreover, the underlying theoretical mechanism as well as the optimum illumination scheme for high-accuracy phase imaging in FPM remains unclear. Herein, we report a high-speed FPM technique based on programmable annular illuminations (AIFPM). The optical-transfer-function (OTF) analysis of FPM reveals that the low-frequency phase information can only be correctly recovered if the LEDs are precisely located at the edge of the objective numerical aperture (NA) in the frequency space. By using only 4 low-resolution images corresponding to 4 tilted illuminations matching a 10×, 0.4 NA objective, we present the high-speed imaging results of in vitro HeLa cell mitosis and apoptosis at a frame rate of 25 Hz with a full-pitch resolution of 655 nm at a wavelength of 525 nm (effective NA = 0.8) across a wide field-of-view (FOV) of 1.77 mm², corresponding to a space-bandwidth-time product of 411 megapixels per second. Our work reveals an important capability of FPM towards high-speed, high-throughput imaging of in vitro live cells, achieving video-rate QPI performance across a wide range of scales, both spatial and temporal.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018AMT....11.1377B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018AMT....11.1377B"><span>Raindrop fall velocities from an optical array probe and 2-D video disdrometer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bringi, Viswanathan; Thurai, Merhala; Baumgardner, Darrel</p> <p>2018-03-01</p> <p>We report on fall speed measurements of raindrops in light-to-heavy rain events from two climatically different regimes (Greeley, Colorado, and Huntsville, Alabama) using the high-resolution (50 µm) Meteorological Particle Spectrometer (MPS) and a third-generation (170 µm resolution) 2-D video disdrometer
(2DVD). To mitigate wind effects, especially for the small drops, both instruments were installed within a 2/3-scale Double Fence Intercomparison Reference (DFIR) enclosure. Two cases involved light-to-moderate wind speeds/gusts, while the third case was a tornadic supercell and several squall lines that passed over the site with high wind speeds/gusts. As a proxy for turbulent intensity, maximum wind speeds recorded every 3 s at 10 m height at the instrumented site were differenced from the 5-min average wind speeds and then squared. Fall speeds vs. size were derived from the MPS for drops of 0.1-2 mm and from the 2DVD for drops > 0.7 mm. Consistency of fall speeds from the two instruments in the overlap region (0.7-2 mm) gave confidence in the data quality and processing methodologies. Our results indicate that under low turbulence, the mean fall speeds agree well with fits to the terminal velocity measured in the laboratory by Gunn and Kinzer from 100 µm up to precipitation sizes. The histograms of fall speeds for 0.5, 0.7, 1 and 1.5 mm sizes were examined in detail under the same conditions. The histogram shapes for the 1 and 1.5 mm sizes were symmetric and in good agreement between the two instruments, with no evidence of skewness or of sub- or super-terminal fall speeds. The histograms of the smaller 0.5 and 0.7 mm drops from the MPS, while generally symmetric, showed that occasional occurrences of sub- and super-terminal fall speeds could not be ruled out. In the supercell case, the very strong gusts and inferred high turbulence intensity caused a significant broadening of the fall speed distributions with negative skewness (for drops of 1.3, 2 and 3 mm).
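The turbulence proxy described above can be written down directly; the 100-sample window (5 min of 3-s samples) and the exact max-minus-mean form are assumptions inferred from the text, not the authors' code:

```python
import numpy as np

def turbulence_proxy(wind_3s, samples_per_5min=100):
    """Proxy for turbulent intensity as described above (assumed form):
    square of the difference between the maximum 3-s wind-speed sample in
    each 5-min window and that window's mean wind speed."""
    wind_3s = np.asarray(wind_3s, dtype=float)
    n = len(wind_3s) // samples_per_5min
    out = []
    for i in range(n):
        win = wind_3s[i * samples_per_5min:(i + 1) * samples_per_5min]
        out.append((win.max() - win.mean()) ** 2)  # (gust - mean)^2
    return np.array(out)
```

Calm, steady wind yields a proxy near zero; a single strong gust in a window dominates that window's value.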
The mean fall speeds were also found to decrease nearly linearly with increasing turbulent intensity, attaining values about 25-30 % less than the terminal velocity of Gunn-Kinzer, i.e., sub-terminal fall speeds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014MeScT..25a5002S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014MeScT..25a5002S"><span>Video-based measurements for wireless capsule endoscope tracking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Spyrou, Evaggelos; Iakovidis, Dimitris K.</p> <p>2014-01-01</p> <p>The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded-up robust features (SURF) from video frames, registration of consecutive frames based on the random sample consensus (RANSAC) algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art.
The findings of this research pave the way for cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.7116E..0NC','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.7116E..0NC"><span>A reagentless real-time method for the multiparameter analysis of nanoparticles as a potential 'trigger' device</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Carr, Bob; Knowles, John; Warren, Jeremy</p> <p>2008-10-01</p> <p>We describe the continuing development of a laser-based, light scattering detector system capable of detecting and analysing liquid-borne nanoparticles. Using a finely focussed and specially configured laser beam to illuminate a suspension of nanoparticles in a small (250 µl) sample and videoing the Brownian motion of each and every particle in the detection zone should allow individual but simultaneous detection and measurement of particle size, scattered light intensity, electrophoretic mobility and, where applicable, shape asymmetry. This real-time, multi-parameter analysis capability offers the prospect of reagentlessly differentiating between different particle types within a complex sample of potentially high and variable background. Employing relatively low-powered (50-100 mW) laser diode modules and low-resolution CCD arrays, each component could be run off battery power, allowing distributed/remote or personal deployment. Voltages needed for electrophoresis measurements would be similarly low (e.g. 20 V, low current), and 30-second videos (exported at mobile/cell phone download speeds) could be analysed remotely.
The potential of such low-cost technology as a field-deployable grid of remote, battery-powered and reagentless, multi-parameter sensors for use as trigger devices is discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1349516','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1349516"><span>Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Savas, Omer</p> <p></p> <p>Expanded deep sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets for the first responders to deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which is able to provide sufficiently accurate flow-rate estimates in the field for initial responders to accidental oil discharges in submarine operations. The essence of our approach is based on tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by the first responders for field implementation. We have tested the tool on submerged water and oil jets which are made visible using fluorescent dyes. We have been able to estimate the discharge rate within 20% accuracy. A high-end Windows laptop computer as the operating platform and a USB-connected high-speed, high-resolution monochrome camera as the imaging device are sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field.
Results are obtained in a matter of minutes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19940024603','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19940024603"><span>Video transmission on ATM networks. Ph.D. Thesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chen, Yun-Chung</p> <p>1993-01-01</p> <p>The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential for providing a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on the bridging of network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed.
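The dual leaky bucket policing idea mentioned above can be sketched as two coupled buckets, one bounding the sustained rate and burst size, the other the peak rate; the parameter names and check-then-commit structure here are illustrative, not taken from the thesis:

```python
class LeakyBucket:
    """Bucket that drains at `rate` units/s and holds at most `size` units."""
    def __init__(self, rate, size):
        self.rate, self.size = rate, size
        self.level, self.last_t = 0.0, 0.0

    def drain(self, t):
        """Let the bucket leak for the time elapsed since the last event
        (event times `t` are assumed non-decreasing)."""
        self.level = max(0.0, self.level - (t - self.last_t) * self.rate)
        self.last_t = t


class DualLeakyBucket:
    """Dual leaky bucket policer: a unit of traffic conforms only if it
    fits in both buckets; check both first, then commit to both, so a
    non-conforming cell leaves neither bucket filled."""
    def __init__(self, sustained_rate, burst, peak_rate, peak_tol):
        self.buckets = [LeakyBucket(sustained_rate, burst),
                        LeakyBucket(peak_rate, peak_tol)]

    def conforms(self, t, units=1.0):
        for b in self.buckets:
            b.drain(t)
        if all(b.level + units <= b.size for b in self.buckets):
            for b in self.buckets:
                b.level += units
            return True
        return False
```

Back-to-back cells are rejected by the peak-rate bucket, while a long burst at the peak rate is eventually rejected by the sustained-rate bucket.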
Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSV...390..232Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSV...390..232Y"><span>Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David</p> <p>2017-03-01</p> <p>Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse measurements with low spatial resolution, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface.
Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide simultaneous measurements with high spatial resolution. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video-camera-based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. Then the signal aliasing properties in modal analysis are exploited to estimate the modal frequencies and damping ratios.
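The aliasing relationship the algorithm exploits is simple to state: a mode at frequency f, sampled uniformly at a rate fs below the Nyquist requirement, appears folded into the base band [0, fs/2]. A minimal helper (illustrative, not the authors' code):

```python
def alias_frequency(f_true, fs):
    """Frequency observed after uniform sampling at fs (Hz), folded into
    the base band [0, fs/2]: spectral images repeat every fs and mirror
    about the Nyquist frequency fs/2."""
    f = f_true % fs          # images repeat every fs
    return min(f, fs - f)    # fold about the Nyquist frequency
```

For example, with a 30 Hz camera a 35 Hz mode appears at 5 Hz; recovering f_true from the observed alias then amounts to selecting among the candidates k·fs ± f_alias, which is the ambiguity the method resolves.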
The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Color+AND+preference+AND+psychology&pg=2&id=EJ336760','ERIC'); return false;" href="https://eric.ed.gov/?q=Color+AND+preference+AND+psychology&pg=2&id=EJ336760"><span>Using Microcomputers in the Undergraduate Laboratory.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hovancik, John R.</p> <p>1986-01-01</p> <p>A computer-controlled experimental psychology investigation suitable for use in an undergraduate laboratory is described. The investigation examines the relationship between aesthetic preference and speed of reaction in making choices between colors generated on a video monitor. 
(Author)</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.gpo.gov/fdsys/pkg/FR-2013-03-19/pdf/2013-06272.pdf','FEDREG'); return false;" href="https://www.gpo.gov/fdsys/pkg/FR-2013-03-19/pdf/2013-06272.pdf"><span>78 FR 16856 - FDIC Advisory Committee on Community Banking; Notice of Meeting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.gpo.gov/fdsys/browse/collection.action?collectionCode=FR">Federal Register 2010, 2011, 2012, 2013, 2014</a></p> <p></p> <p>2013-03-19</p> <p>... viewing, a high speed Internet connection is recommended. The Community Banking meeting videos are made... Insurance Corporation. Valerie J. Best, Assistant Executive Secretary. [FR Doc.
2013-06272 Filed 3-18-13; 8...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9897E..0GB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9897E..0GB"><span>Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos</p> <p>2016-04-01</p> <p>This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture first establishes a software virtualization layer based on QEMU (Quick Emulator), open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, one of the most advanced 3D virtual Graphics Processing Units (GPUs) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering.
Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017SPIE10396E..0QR','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017SPIE10396E..0QR"><span>High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian</p> <p>2017-09-01</p> <p>In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model.
In this paper, the authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chains, and report evaluation results on the test corpus selected by the JPEG committee.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9599E..17G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9599E..17G"><span>Video streaming with SHVC to HEVC transcoding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu</p> <p>2015-09-01</p> <p>This paper proposes an efficient Scalable High Efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information in the SHVC bitstream, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information and transform unit (TU) depth information, is mapped and transcoded to a single-layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) greatly reduces the complexity of the SHVC transcoder, but a significant reduction in picture quality is observed.
Pictures corresponding to the intra refresh pictures in the base layer (BL) will be coded as P pictures in the enhancement layer (EL) of the SHVC bitstream, and directly reusing the intra information from the BL for transcoding will not achieve good coding efficiency. To solve these problems, various transcoding technologies are proposed. The proposed technologies offer different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of the reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% in the low-complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26620200','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26620200"><span>A Feasibility Study of Smartphone-Based Telesonography for Evaluating Cardiac Dynamic Function and Diagnosing Acute Appendicitis with Control of the Image Quality of the Transmitted Videos.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kim, Changsun; Cha, Hyunmin; Kang, Bo Seung; Choi, Hyuk Joong; Lim, Tae Ho; Oh, Jaehoon</p> <p>2016-06-01</p> <p>Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone, with control of the image quality given a limited internet connection speed.
For this study, 100 cases of echocardiography videos (dynamic organ)-50 with an ejection fraction (EF) of ≥50 % and 50 with EF <50 %-and 100 cases of suspected pediatric appendicitis (static organ)-50 with signs of acute appendicitis and 50 with no findings of appendicitis-were consecutively selected. Twelve reviewers reviewed the original videos using the liquid crystal display (LCD) monitor of an ultrasound machine and using a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949-0.986) and 0.963 (0.945-0.982) (P = 0.548) for echocardiography and 0.972 (0.954-0.989) and 0.966 (0.947-0.984) (P = 0.175) for abdominal ultrasonography.
We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhyEd..52d5001K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhyEd..52d5001K"><span>Finding the average speed of a light-emitting toy car with a smartphone light sensor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kapucu, Serkan</p> <p>2017-07-01</p> <p>This study aims to demonstrate how the average speed of a light-emitting toy car may be determined using a smartphone’s light sensor. The freely available Android smartphone application, ‘AndroSensor’, was used for the experiment. The classroom experiment combines complementary physics knowledge of optics and kinematics to find the average speed of a moving object. The speed of the toy car is found by determining the distance between the light-emitting toy car and the smartphone, and the time taken to travel these distances. To ensure that the average speed of the toy car calculated with the help of the AndroSensor was correct, the average speed was also calculated by analyzing video-recordings of the toy car. The resulting speeds found with these different methods were in good agreement with each other. 
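The light-sensor experiment above can be sketched as a small calculation: assuming the toy car's lamp behaves as a point source obeying the inverse-square law, illuminance readings can be converted to distances and differenced over time. The calibration pair (`lux_ref`, `d_ref`) and the inverse-square model are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def speeds_from_lux(lux, t, lux_ref, d_ref):
    """Segment-wise average speeds of a moving light source from
    illuminance samples `lux` at times `t`, assuming a point source with
    E = E_ref * (d_ref / d)**2, calibrated by a reading `lux_ref` taken
    at known distance `d_ref`."""
    lux, t = np.asarray(lux, float), np.asarray(t, float)
    d = d_ref * np.sqrt(lux_ref / lux)      # invert the inverse-square law
    return np.abs(np.diff(d)) / np.diff(t)  # |Δdistance| / Δtime
```

In practice ambient light and the lamp's beam pattern violate the point-source assumption, which is presumably why the study cross-checks against video analysis.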
Hence, it can be concluded that reliable measurements of the average speed of light-emitting objects can be determined with the help of the light sensor of an Android smartphone.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23706754','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23706754"><span>Surgical gesture classification from video and kinematic data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René</p> <p>2013-10-01</p> <p>Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. 
Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJCA..116a..33C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJCA..116a..33C"><span>Dual-Layer Video Encryption using RSA Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.</p> <p>2015-04-01</p> <p>This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. 
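As a rough sketch of what a PN layer of this kind does (this is not the paper's implementation; the LFSR taps, seed, and byte packing are arbitrary choices for illustration), a pseudo-noise keystream can be XORed with the media byte stream, and applying the same operation again restores the original:

```python
# Illustrative PN (pseudo-noise) layer: a 16-bit Fibonacci LFSR
# (taps 16, 14, 13, 11) generates a keystream that is XORed with the
# byte stream. XORing twice with the same keystream decrypts.

def lfsr_bytes(seed, n):
    """Generate n keystream bytes from a 16-bit Fibonacci LFSR."""
    state, out = seed & 0xFFFF, []
    for _ in range(n):
        byte = 0
        for _ in range(8):
            bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def pn_xor(data, seed=0xACE1):
    """Encrypt or decrypt: XOR with the PN keystream is its own inverse."""
    key = lfsr_bytes(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, key))

frame = b"raw video bytes"
enc = pn_xor(frame)
assert pn_xor(enc) == frame  # the same operation decrypts
```

In the paper this PN layer is stacked with RSA; here only the PN step is shown, since the symmetry of XOR is what makes the layer cheap to invert with the correct seed.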
Combining these techniques yields an efficient system, resistant to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption, which is accomplished using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21926649','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21926649"><span>Ballistic fractures: indirect fracture to bone.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dougherty, Paul J; Sherman, Don; Dau, Nathan; Bir, Cynthia</p> <p>2011-11-01</p> <p>Two mechanisms of injury, the temporary cavity and the sonic wave, have been proposed to produce indirect fractures as a projectile passes nearby in tissue. The purpose of this study is to evaluate the temporal relationship of pressure waves using strain gauge technology and high-speed video to elucidate whether the sonic wave, the temporary cavity, or both are responsible for the formation of indirect fractures. Twenty-eight fresh frozen cadaveric diaphyseal tibiae (2) and femurs (26) were implanted into ordnance gelatin blocks. 
Shots were fired using 9- and 5.56-mm bullets traversing through the gelatin only, passing close to the edge of the bone but not touching it, to produce an indirect fracture. High-speed video of the impact event was collected at 20,000 frames/s. Acquisition of the strain data was synchronized with the video at 20,000 Hz. The exact time of fracture was determined by analyzing and comparing the strain gauge output and video. Twenty-eight shots were fired, 2 with 9-mm bullets and 26 with 5.56-mm bullets. The eight indirect fractures that occurred were of a simple (oblique or wedge) pattern. The average distance of the projectile from the bone was 9.68 mm (range, 3-20 mm) for fractured specimens and 15.15 mm (range, 7-28 mm) for nonfractured specimens (Student's t test, p = 0.036). In this study, indirect fractures were produced after passage of the projectile. Thus, the temporary cavity, not the sonic wave, was responsible for the indirect fractures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10498E..2KM','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10498E..2KM"><span>Wide field video-rate two-photon imaging by using spinning disk beam scanner</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko</p> <p>2018-02-01</p> <p>Microscope technologies with a wider field of view, deeper penetration depth, higher spatial resolution and higher imaging speed are required to investigate the intercellular dynamics and the interactions of molecules and organelles in cells or tissues in more detail. The two-photon microscope with a near-infrared (NIR) femtosecond laser is one technique for improving the penetration depth and spatial resolution. 
However, video-rate or high-speed imaging with a wide field of view is difficult to perform with a conventional two-photon microscope, because it relies on point-to-point scanning. In this study, we developed a two-photon microscope with a spinning disk beam scanner and a femtosecond NIR fiber laser with around 10 W average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. The laser generates a beam with up to 10 W average power, 300 fs pulse width and 72 MHz repetition rate. The beam is incident on a spinning disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. Using this system, we obtained 3D images with over 1 mm penetration depth and video-rate images with a 350 x 350 um field of view from the root of Arabidopsis thaliana.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005SPIE.5749..425A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005SPIE.5749..425A"><span>Fundamental study of compression for movie files of coronary angiography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie</p> <p>2005-04-01</p> <p>When network distribution of movie files is considered, lossy-compressed movie files with small file sizes could be useful as reference material. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal and fast heart rates. 
MPEG-1, DivX5.11, WMV9 (Windows Media Video 9) and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed versions and the uncompressed AVI (used in place of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, the best compression technique differed across the evaluation factors. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. Thus, which compression format performs best depends on the speed of the movie, owing to differences among the compression algorithms; this is thought to reflect the influence of inter-frame compression. Video compression combines inter-frame and intra-frame compression, and because each method affects the image differently, the relation between the compression algorithm and our results needs further examination.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26046799','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26046799"><span>Direct Evidence for Vision-based Control of Flight Speed in Budgerigars.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schiffner, Ingo; Srinivasan, Mandyam V</p> <p>2015-06-05</p> <p>We have investigated whether, and, if so, how birds use vision to regulate the speed of their flight. 
Budgerigars, Melopsittacus undulatus, were filmed in 3-D using high-speed video cameras as they flew along a 25 m tunnel in which stationary or moving vertically oriented black and white stripes were projected on the side walls. We found that the birds increased their flight speed when the stripes were moved in the birds' flight direction, but decreased it only marginally when the stripes were moved in the opposite direction. The results provide the first direct evidence that Budgerigars use cues based on optic flow to regulate their flight speed. However, unlike the situation in flying insects, it appears that the control of flight speed in Budgerigars is direction-specific. It does not rely solely on cues derived from optic flow, but may also be determined by energy constraints.
The results provide the first direct evidence that Budgerigars use cues based on optic flow to regulate their flight speed. However, unlike the situation in flying insects, it appears that the control of flight speed in Budgerigars is direction-specific. It does not rely solely on cues derived from optic flow, but may also be determined by energy constraints. PMID:26046799</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950016162','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950016162"><span>Method of encouraging attention by correlating video game difficulty with attention level</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Pope, Alan T. (Inventor); Bogart, Edward H. (Inventor)</p> <p>1994-01-01</p> <p>A method of encouraging attention in persons such as those suffering from Attention Deficit Disorder is provided by correlating the level of difficulty of a video game with the level of attention in a subject. A conventional video game comprises a video display which depicts objects for interaction with a player and a difficulty adjuster which increases the difficulty level, e.g., action speed and/or evasiveness of the depicted object, in a predetermined manner. The electrical activity of the brain is measured at selected sites to determine levels of awareness, e.g., activity in the beta, theta, and alpha states. A value is generated based on this measured electrical signal, which is indicative of the level of awareness. 
The difficulty level of the game is increased as the awareness level value decreases and is decreased as the awareness level value increases.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1994SPIE.2188..122R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1994SPIE.2188..122R"><span>Schemes for efficient transmission of encoded video streams on high-speed networks</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ramanathan, Srinivas; Vin, Harrick M.; Rangan, P. Venkat</p> <p>1994-04-01</p> <p>In this paper, we argue that significant performance benefits can accrue if integrated networks implement application-specific mechanisms that account for the diversities in media compression schemes. Towards this end, we propose a simple, yet effective, strategy called Frame Induced Packet Discarding (FIPD), in which, upon detection of loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. In order to analytically quantify the performance of FIPD so as to obtain fractional frame losses that can be guaranteed to video channels, we develop a finite-state, discrete-time Markov chain model of the FIPD strategy. The fractional frame loss thus computed can serve as the criterion for admission control at the network. 
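The discard rule itself can be sketched in a few lines (a minimal illustration under assumed packet bookkeeping; the paper analyzes the strategy with a Markov chain model rather than prescribing code):

```python
# Sketch of Frame Induced Packet Discarding (FIPD): once a frame has
# lost a threshold number of packets, the rest of that frame's packets
# are discarded rather than forwarded. The (frame_id, lost) packet
# representation is an assumption made for this illustration.

def fipd_filter(packets, loss_threshold):
    """packets: list of (frame_id, lost). Returns indices forwarded."""
    losses = {}
    dropped_frames = set()
    forwarded = []
    for i, (frame_id, lost) in enumerate(packets):
        if frame_id in dropped_frames:
            continue  # frame already given up on: discard the remainder
        if lost:
            losses[frame_id] = losses.get(frame_id, 0) + 1
            if losses[frame_id] >= loss_threshold:
                dropped_frames.add(frame_id)
        else:
            forwarded.append(i)
    return forwarded

# Frame 1 loses its 2nd and 3rd packets; with threshold 2 its last
# packet is discarded, so only packets 0 and 4 are forwarded.
forwarded = fipd_filter(
    [(1, False), (1, True), (1, True), (1, False), (2, False)],
    loss_threshold=2)
```

The point of the rule is that a frame which can no longer be decoded should not consume further network resources; the threshold is set per video channel from the application's encoding scheme.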
Performance evaluations demonstrate the utility of the FIPD strategy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1340006','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1340006"><span>Accelerator Science: Why RF?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lincoln, Don</p> <p></p> <p>Particle accelerators can fire beams of subatomic particles at near the speed of light. The accelerating force is generated using radio frequency technology and a whole lot of interesting features. In this video, Fermilab’s Dr. Don Lincoln explains how it all works.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=laser+AND+electronics&pg=3&id=EJ573879','ERIC'); return false;" href="https://eric.ed.gov/?q=laser+AND+electronics&pg=3&id=EJ573879"><span>Selection of Electronic Resources.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Weathers, Barbara</p> <p>1998-01-01</p> <p>Discusses the impact of electronic resources on collection development; selection of CD-ROMs, (platform, speed, video and sound, networking capability, installation and maintenance); selection of laser disks; and Internet evaluation (accuracy of content, authority, objectivity, currency, technical characteristics). 
Lists Web sites for evaluating…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910000472&hterms=centrifuge&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dcentrifuge','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910000472&hterms=centrifuge&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3Dcentrifuge"><span>Variable-Speed Instrumented Centrifuges</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chapman, David K.; Brown, Allan H.</p> <p>1991-01-01</p> <p>Report describes conceptual pair of centrifuges, speed of which varied to produce range of artificial gravities in zero-gravity environment. Image and data recording and controlled temperature and gravity provided for 12 experiments. Microprocessor-controlled centrifuges include video cameras to record stop-motion images of experiments. Potential applications include studies of effect of gravity on growth and on production of hormones in corn seedlings, experiments with magnetic flotation to separate cells, and electrophoresis to separate large fragments of deoxyribonucleic acid.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EJPh...34..915C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EJPh...34..915C"><span>Studying the internal ballistics of a combustion-driven potato cannon using high-speed video</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Courtney, E. D. S.; Courtney, M. 
W.</p> <p>2013-07-01</p> <p>A potato cannon was designed to accommodate several different experimental propellants and have a transparent barrel so the movement of the projectile could be recorded on high-speed video (at 2000 frames per second). Five experimental propellants were tested: propane (C3H8), acetylene (C2H2), ethanol (C2H6O), methanol (CH4O) and butane (C4H10). The quantity of each experimental propellant was calculated to approximate a stoichiometric mixture, considering the upper and lower flammability limits, which in turn were affected by the volume of the combustion chamber. Cylindrical projectiles were cut from raw potatoes so that there was an airtight fit, and each weighed 50 (± 0.5) g. For each trial, position as a function of time was determined via frame-by-frame analysis. Five trials were made for each experimental propellant and the results analyzed to compute velocity and acceleration as functions of time. Additional quantities, including the force on the potato and the pressure applied to the potato, were also computed. For each experimental propellant, average velocity versus barrel position curves were plotted. The most effective experimental propellant was defined as that which accelerated the potato to the highest muzzle velocity. 
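The frame-by-frame analysis described above amounts to numerical differentiation of the position record. A minimal sketch (the position samples are invented; the study's actual processing may differ):

```python
# Velocity and acceleration from a position-time record sampled at
# 2000 frames per second, via finite differences: central differences
# in the interior, one-sided differences at the ends.

def finite_diff(y, dt):
    """Differentiate a sampled signal with respect to time."""
    n = len(y)
    d = [0.0] * n
    d[0] = (y[1] - y[0]) / dt
    d[-1] = (y[-1] - y[-2]) / dt
    for i in range(1, n - 1):
        d[i] = (y[i + 1] - y[i - 1]) / (2 * dt)
    return d

dt = 1 / 2000.0                         # 2000 frames per second
x = [0.0, 0.001, 0.004, 0.009, 0.016]   # barrel positions (m), invented
v = finite_diff(x, dt)                  # velocity, m/s
a = finite_diff(v, dt)                  # acceleration, m/s^2
```

Force and pressure then follow from the projectile mass and the barrel cross-section (F = ma, p = F/A).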
On average, the experimental propellant acetylene performed best (138.1 m s-1), followed by methanol (48.2 m s-1), butane (34.6 m s-1), ethanol (33.3 m s-1) and propane (27.9 m s-1).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5630145','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5630145"><span>Experimental Investigation on Minimum Frame Rate Requirements of High-Speed Videoendoscopy for Clinical Voice Assessment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Deliyski, Dimitar D; Powell, Maria EG; Zacharias, Stephanie RC; Gerlach, Terri Treman; de Alarcon, Alessandro</p> <p>2015-01-01</p> <p>This study investigated the impact of high-speed videoendoscopy (HSV) frame rates on the assessment of nine clinically relevant vocal-fold vibratory features. Fourteen adult patients with voice disorders and 14 adult normal controls were recorded using monochromatic rigid HSV at a rate of 16000 frames per second (fps) and a spatial resolution of 639×639 pixels. The 16000-fps data were downsampled to 16 other rate denominations. Using a paired-comparison design, nine common clinical vibratory features were visually compared between the downsampled and the original images. Three raters reported the thresholds at which: (1) a detectable difference between the two videos was first noticed, and (2) differences between the two videos would result in a change of clinical rating. Results indicated that glottal edge, mucosal wave magnitude and extent, aperiodicity, and contact and loss of contact of the vocal folds were the vibratory features most sensitive to frame rate. 
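Downsampling of this kind can be emulated by keeping every k-th frame of the 16000-fps recording; the integer-stride scheme below is a sketch of the general idea, not the authors' pipeline:

```python
# Emulate a lower frame rate by keeping every k-th frame of a
# high-rate recording: k = 2 gives 8000 fps, k = 3 gives ~5333 fps,
# k = 4 gives 4000 fps. The frame list is a stand-in for video frames.

def downsample(frames, base_fps, target_fps):
    """Return (kept frames, effective fps) for the nearest integer stride."""
    k = round(base_fps / target_fps)
    return frames[::k], base_fps / k

frames = list(range(16))  # 1 ms of video at 16000 fps
kept, fps = downsample(frames, 16000, 5333)
# kept == [0, 3, 6, 9, 12, 15]; effective rate is 16000/3 fps
```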
Of these vibratory features, the glottal edge was selected for further analysis, due to its higher rating reliability, universal prevalence and consistent definition. Rates of 8000 fps were found to be free from visually-perceivable feature degradation, and for rates of 5333 fps, degradation was minimal. For rates of 4000 fps and higher, clinical assessments of glottal edge were not affected. Rates of 2000 fps changed the clinical ratings in over 16% of the samples, which could lead to inaccurate functional assessment. PMID:28989342</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3065345','PMC'); return false;" 
href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3065345"><span>Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.</p> <p>2011-01-01</p> <p>Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high-speed video microscopy system with an automated algorithm to detect the pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time-consuming and subject to errors and potential bias. The presented algorithm is semiautomated, giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated mesenteric microlymphatic vessels. We recorded contraction propagation velocities around 10 mm/s with a shortening speed of 20.4 to 27.1 μm/s on average and a contraction frequency of 7.4 to 21.6 contractions/min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. 
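A toy version of the propagation-velocity measurement makes the idea concrete: contraction onset is detected at two observation sites along the vessel, and the velocity is the site separation divided by the onset delay. Everything below (the diameter traces, threshold, and site spacing) is invented for illustration; the paper's algorithm is more sophisticated:

```python
# Onset = first frame where the vessel-diameter trace drops below a
# threshold; propagation velocity = site separation / onset delay.
# Traces are diameters (um) sampled at 300 frames per second.

def onset_time(trace, times, threshold):
    """Return the first time at which the trace falls below threshold."""
    for t, d in zip(times, trace):
        if d < threshold:
            return t
    return None  # no contraction detected

fps = 300.0
times = [i / fps for i in range(6)]
site_a = [100, 100, 95, 80, 70, 65]    # upstream observation site
site_b = [100, 100, 100, 100, 95, 80]  # site 0.1 mm downstream
delay = onset_time(site_b, times, 90) - onset_time(site_a, times, 90)
velocity = 0.1 / delay  # mm/s; here the wave covers 0.1 mm in 2 frames
```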
PMID:21361700</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..18.8101T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..18.8101T"><span>Bag-breakup control of surface drag in hurricanes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Troitskaya, Yuliya; Zilitinkevich, Sergej; Kandaurov, Alexander; Ermakova, Olga; Kozlov, Dmitry; Sergeev, Daniil</p> <p>2016-04-01</p> <p>Air-sea interaction at extreme winds is of special interest in connection with the problem of sea surface drag reduction at wind speeds exceeding 30-35 m/s. This phenomenon, predicted by Emanuel (1995) and confirmed by a number of field (e.g., Powell, et al, 2003) and laboratory (Donelan et al, 2004) experiments, still awaits a physical explanation. Several papers attributed the drag reduction to spume droplets, the spray torn off the crests of breaking waves (e.g., Kudryavtsev, Makin, 2011, Bao, et al, 2011). The fluxes associated with the spray are determined by the rate of droplet production at the surface, quantified by the sea spray generation function (SSGF), defined as the number of spray particles of radius r produced from a unit area of water surface in unit time. However, the mechanism of spume droplet formation is unknown and empirical estimates of the SSGF vary over six orders of magnitude; therefore, the production rate of large sea spray droplets is not adequately described and there are significant uncertainties in estimates of exchange processes in hurricanes. Moreover, it is not known what the air-sea interface looks like, or how water is fragmented into spray, at hurricane winds. Using high-speed video filming, we observed the mechanisms of spume droplet production at strong winds, investigated their statistics, and compared their efficiency. 
Experiments showed that the generation of spume droplets near the wave crest is caused by the following events: bursting of submerged bubbles, generation and breakup of "projections", and "bag breakup". Statistical analysis of these experiments showed that at strong and hurricane winds the main mechanism of spray generation is "bag breakup", namely, the inflation and consequent bursting of short-lived, sail-like pieces of the water-surface film ("bags"). On the basis of general principles of statistical physics (a canonical-ensemble model), we developed statistics of the "bag-breakup" events: their number and the statistical distribution of their geometrical parameters depending on wind speed. Based on these statistics, we estimated the surface stress caused by bags as the average sum of the stresses caused by individual bags, depending on their geometrical parameters. The resulting stress is subject to counteracting impacts of increasing wind speed: the number of bags increases while their sizes and lifetimes decrease. The balance yields a peaked dependence of bag resistance on wind speed: the share of bag stress peaks at U10 ≈ 35 m/s and then decreases. This peaking of the surface stress associated with bag breakup explains the seemingly paradoxical non-monotonic wind dependence of the surface drag coefficient, which peaks at winds of about 35 m/s. This work was supported by the Russian Foundation of Basic Research (14-05-91767, 13-05-12093, 16-05-00839, 14-05-91767, 16-55-52025, 15-35-20953), and experiment and equipment were supported by the Russian Science Foundation (Agreements 14-17-00667 and 15-17-20009 respectively); Yu. Troitskaya, A. Kandaurov and D. Sergeev were partially supported by FP7 Collaborative Project No. 
612610.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5040264','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5040264"><span>Strategies for Pre-Emptive Mid-Air Collision Avoidance in Budgerigars</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Schiffner, Ingo; Srinivasan, Mandyam V.</p> <p>2016-01-01</p> <p>We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft. PMID:27680488</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19790027993&hterms=Robot&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DRobot%2527s','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19790027993&hterms=Robot&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DRobot%2527s"><span>The robot's eyes - Stereo vision system for automated scene analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Williams, D. 
S.</p> <p>1977-01-01</p> <p>Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and assessing traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70177807','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70177807"><span>3-D high-speed imaging of volcanic bomb trajectory in basaltic explosive eruptions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Gaudin, D.; Taddeucci, J.; Houghton, Bruce F.; Orr, Tim R.; Andronico, D.; Del Bello, E.; Kueppers, U.; Ricci, T.; Scarlato, P.</p> <p>2016-01-01</p> <p>Imaging, in general, and high speed imaging in particular are important emerging tools for the study of explosive volcanic eruptions. However, traditional 2-D video observations cannot measure volcanic ejecta motion toward and away from the camera, strongly hindering our capability to fully determine crucial hazard-related parameters such as explosion directionality and pyroclasts' absolute velocity. In this paper, we use up to three synchronized high-speed cameras to reconstruct pyroclast trajectories in three dimensions. 
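The multi-camera 3-D reconstruction step described above can be illustrated with a minimal two-view linear triangulation. This is a generic sketch, not the authors' processing chain: the projection matrices and the tracked point below are invented, and a real setup would first require camera calibration and synchronization.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A, homogeneous 3-D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two cameras with a 1 m baseline observe a known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)
```

With noisy real tracks, each frame's triangulated position would carry the azimuth/dip and velocity uncertainties quoted in the abstract.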
Classical stereographic techniques are adapted to overcome the difficult observation conditions of active volcanic vents, including the large number of overlapping pyroclasts which may change shape in flight, variable lighting and clouding conditions, and lack of direct access to the target. In particular, we use a laser rangefinder to measure the geometry of the filming setup and manually track pyroclasts on the videos. This method reduces uncertainties to 10° in azimuth and dip angle of the pyroclasts, and down to 20% in the absolute velocity estimation. We demonstrate the potential of this approach by three examples: the development of an explosion at Stromboli, a bubble burst at Halema'uma'u lava lake, and an in-flight collision between two bombs at Stromboli.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMAE12A..07S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMAE12A..07S"><span>Lightning-channel conditioning</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sonnenfeld, R.; da Silva, C. L.; Eack, K.; Edens, H. E.; Harley, J.; McHarg, M.; Contreras Vidal, L.</p> <p>2017-12-01</p> <p>The concept of "conditioning" has several distinct applications in understanding lightning. It is commonly associated with the greater speed of dart leaders vs. stepped leaders and the retracing of a cloud-to-ground channel by later return strokes. We will show additional examples of conditioning: (A) High-speed videos of triggered flashes show "dark" periods of up to 50 ms between rebrightenings of an existing channel. (B) Interferometer (INTF) images of intra-cloud (IC) flashes demonstrate that electric-field "K-changes" correspond to rapid propagation of RF impulses along a previously formed channel separated by up to 20 ms with little RF emission on that channel. 
(C) Further, INTF images frequently show that the initial IC channel is more branched and "fuzzier" than its later incarnations. Also, we contrast high-speed video, INTF observations, and spectroscopic measurements with possible physical mechanisms that can explain how channel conditioning guides and facilitates dart leader propagation. These mechanisms include: (1) a plasmochemical effect where electrons are stored in negative ions and released during the dart leader propagation via field-induced detachment; (2) small-amplitude residual currents that can maintain electrical conductivity; and (3) slow heat conduction cooling of plasma owing to channel expansion dynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19790009986','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19790009986"><span>The effect of interference on delta modulation encoded video signals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schilling, D. L.</p> <p>1979-01-01</p> <p>The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory design. After a suitable delta modulator algorithm was found via simulation, the results were analyzed and the design was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. 
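The basic delta-modulation loop studied in this work can be sketched in a few lines. This is a generic one-bit encoder/decoder with an invented step size, not the specific hardware algorithms evaluated in the study:

```python
import numpy as np

def delta_modulate(signal, step=0.05):
    """1-bit delta modulation: transmit only the sign of the error
    between the input and an integrator that tracks it."""
    bits = np.zeros(len(signal), dtype=np.int8)
    estimate = 0.0
    for i, s in enumerate(signal):
        bits[i] = 1 if s >= estimate else -1
        estimate += bits[i] * step      # encoder-side integrator
    return bits

def delta_demodulate(bits, step=0.05):
    """Decoder: integrate the bit stream to rebuild the waveform."""
    return np.cumsum(bits.astype(float) * step)

# Slowly varying test signal; slope stays below step/sample, so no overload.
t = np.linspace(0, 1, 1000)
x = 0.5 * np.sin(2 * np.pi * 3 * t)
y = delta_demodulate(delta_modulate(x))
print(float(np.max(np.abs(x - y))))
```

A channel bit error in this scheme shifts the decoder's integrator permanently, which is why the error-correction schemes mentioned above matter for delta-encoded video.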
The final area of investigation concerned finding delta modulators which could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5922975','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5922975"><span>Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael</p> <p>2018-01-01</p> <p>Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. 
This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which may already have been present in the earliest nervous systems. PMID:29589829</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011OptEn..50b7202K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011OptEn..50b7202K"><span>Detection of dominant flow and abnormal events in surveillance video</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kwak, Sooyeong; Byun, Hyeran</p> <p>2011-02-01</p> <p>We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a feature-based approach that does not track each moving object individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were carried out on several real-life databases, and the results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. 
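The paper's latent Dirichlet allocation model is not reproduced here, but the underlying idea of flagging motion that deviates from a dominant flow can be sketched. The thresholds and the synthetic flow vectors below are invented for illustration:

```python
import numpy as np

def abnormal_flow(vectors, angle_tol=np.pi / 4, speed_factor=3.0):
    """Flag motion vectors deviating from the dominant flow.
    vectors: (N, 2) array of per-feature (dx, dy) displacements."""
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])
    speeds = np.linalg.norm(vectors, axis=1)
    # Dominant direction: circular mean of all vector angles.
    dominant = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
    ang_dev = np.abs(np.angle(np.exp(1j * (angles - dominant))))
    # Abnormal if direction or speed is far from the crowd's behaviour.
    return (ang_dev > angle_tol) | (speeds > speed_factor * np.median(speeds))

rng = np.random.default_rng(0)
crowd = np.c_[rng.normal(1.0, 0.1, 50), rng.normal(0.0, 0.1, 50)]  # moving right
intruder = np.array([[-1.0, 0.0]])                                 # moving left
flags = abnormal_flow(np.vstack([crowd, intruder]))
print(flags[-1], int(flags[:50].sum()))
```

As in the abstract, this style of detector works on aggregate flow statistics rather than on per-object tracks.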
The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds occur.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29175825','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29175825"><span>Reconstruction of head impacts in FIS World Cup alpine skiing.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Steenstrup, Sophie Elspeth; Mok, Kam-Ming; McIntosh, Andrew S; Bahr, Roald; Krosshaug, Tron</p> <p>2018-06-01</p> <p>Prior to the 2013/2014 season, the International Ski Federation (FIS) increased the helmet testing speed from 5.4 to 6.8 m/s for alpine downhill, super-G and giant slalom. Whether this increased testing speed reflects head impact velocities in real head injury situations on snow is unclear. We therefore investigated the injury mechanisms and gross head impact biomechanics in seven real head injury situations among World Cup (WC) alpine skiers. We analysed in detail nine head impacts from seven head injury videos from the FIS Injury Surveillance System, spanning nine WC seasons (2006-2015). We used commercial video-based motion analysis software to estimate head impact kinematics in two dimensions, including directly preimpact and postimpact, from broadcast video. The sagittal plane angular movement of the head was also measured using angle measurement software. In seven of nine head impacts, the estimated normal to slope preimpact velocity was higher than the current FIS helmet rule of 6.8 m/s (mean 8.1 (±SD 0.6) m/s, range 1.9±0.8 to 12.1±0.4 m/s). The nine head impacts had a mean normal to slope velocity change of 9.3±1.0 m/s, range 5.2±1.1 to 13.5±1.3 m/s. There was a large change in sagittal plane angular velocity (mean 43.3±2.9 rad/s (range 21.2±1.5 to 64.2±3.0 rad/s)) during impact. 
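The video-based kinematics described above amount to differentiating digitized positions with respect to frame time. A minimal sketch with an invented synthetic track (the commercial software used in the study is not reproduced here):

```python
import numpy as np

def velocity_from_track(positions_m, fps):
    """Central-difference velocity (m/s) from per-frame 2-D positions."""
    positions_m = np.asarray(positions_m, dtype=float)
    return np.gradient(positions_m, 1.0 / fps, axis=0)

# Hypothetical track: head moving at 8 m/s normal to slope, filmed at 100 fps;
# x is taken along the slope, y normal to the slope.
fps = 100
t = np.arange(10) / fps
track = np.c_[12.0 * t, -8.0 * t]
v = velocity_from_track(track, fps)
pre_impact_normal = v[-1, 1]        # normal-to-slope velocity component
print(pre_impact_normal)
```

The normal-to-slope velocity change across an impact is then simply the difference between this component just after and just before the contact frame.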
The estimated normal to slope preimpact velocity was higher than the current FIS helmet rule of 6.8 m/s in seven of nine head impacts. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010SPIE.7668E..0SC','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010SPIE.7668E..0SC"><span>Real-time unmanned aircraft systems surveillance video mosaicking using GPU</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.</p> <p>2010-04-01</p> <p>Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. 
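The homography-estimation step of the pipeline above can be sketched with a plain DLT solve, assuming the SIFT correspondences have already been matched; a production system would typically call an optimized library routine (e.g. OpenCV's `findHomography`) with RANSAC outlier rejection rather than this bare least-squares version:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src -> dst.
    src, dst: (N, 2) matched point arrays, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null vector = stacked homography entries
    return H / H[2, 2]

# Synthetic check: recover a known slight rotation plus translation.
theta = 0.05
H_true = np.array([[np.cos(theta), -np.sin(theta), 10.0],
                   [np.sin(theta),  np.cos(theta), -4.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [640, 0], [640, 480], [0, 480], [320, 240]], float)
h = (H_true @ np.c_[src, np.ones(5)].T).T
dst = h[:, :2] / h[:, 2:]
H_est = estimate_homography(src, dst)
print(np.round(H_est, 4))
```

Warping each frame by the accumulated homography and blending the overlaps then yields the mosaic described in the abstract.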
All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; one comes from an Infrared (IR) camera and one from an Electro-Optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21725104','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21725104"><span>Relationships between triathlon performance and pacing strategy during the run in an international competition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Le Meur, Yann; Bernard, Thierry; Dorel, Sylvain; Abbiss, Chris R; Honnorat, Gérard; Brisswalter, Jeanick; Hausswirth, Christophe</p> <p>2011-06-01</p> <p>The purpose of the present study was to examine relationships between athletes' pacing strategies and running performance during an international triathlon competition. Running split times for each of the 107 finishers of the 2009 European Triathlon Championships (42 females and 65 males) were determined with the use of a digital synchronized video analysis system. Five cameras were placed at various positions of the running circuit (4 laps of 2.42 km). Running speed and an index of running speed variability (IRSVrace) were subsequently calculated over each section or running split. 
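The split-time processing described above can be sketched as follows. The section lengths and split times are invented, and since the abstract does not define IRSV exactly, the sample standard deviation of section speeds is used here as a stand-in variability index:

```python
import numpy as np

def section_speeds(section_lengths_m, split_times_s):
    """Running speed (km/h) over each section of the lap."""
    return 3.6 * np.asarray(section_lengths_m) / np.asarray(split_times_s)

def speed_variability(speeds_kmh):
    """Illustrative variability index: sample SD of section speeds (km/h).
    (The paper's exact IRSV definition is not given in the abstract.)"""
    return float(np.std(speeds_kmh, ddof=1))

lengths = [1272, 580, 568]       # hypothetical section lengths (m)
splits = [240.0, 125.0, 118.0]   # hypothetical section split times (s)
v = section_speeds(lengths, splits)
print(np.round(v, 2), round(speed_variability(v), 2))
```

Comparing the first-lap speed of a section against its mean over the remaining laps, as in the abstract, is then a one-line difference on the per-lap speed arrays.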
Mean running speed over the first 1272 m of lap 1 was 0.76 km·h-1 (+4.4%) and 1.00 km·h-1 (+5.6%) faster than the mean running speed over the same section during the last three laps, for females and males, respectively (P < .001). A significant inverse correlation was observed between RSrace and IRSVrace for all triathletes (females r = -0.41, P = .009; males r = -0.65, P = .002; whole population r = -0.76, P = .001). Females demonstrated higher IRSVrace compared with men (6.1 ± 0.5 km·h-1 and 4.0 ± 1.4 km·h-1, for females and males, respectively, P = .001) due to a greater decrease in running speed over uphill sections. Pacing during the run appears to play a key role in high-level triathlon performance. Elite triathletes should reduce their initial running speed during international competitions, even if high levels of motivation and direct opponents lead them to adopt an aggressive strategy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..MAR.M1304M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..MAR.M1304M"><span>High-speed AFM and the reduction of tip-sample forces</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miles, Mervyn; Sharma, Ravi; Picco, Loren</p> <p></p> <p>High-speed DC-mode AFM has been shown to be routinely capable of imaging at video rate, and, if required, at over 1000 frames per second. At sufficiently high tip-sample velocities in ambient conditions, the tip lifts off the sample surface in a superlubricity process which reduces the level of shear forces imposed on the sample by the tip and therefore reduces the potential damage and distortion of the sample being imaged. High-frequency mechanical oscillations, both lateral and vertical, have been reported to reduce the tip-sample frictional forces. 
We have investigated the effect of combining linear high-speed scanning with these small-amplitude high-frequency oscillations, with the aim of further reducing the force interaction in high-speed imaging. Examples of this new version of high-speed AFM imaging will be presented for biological samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..107a2068N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..107a2068N"><span>Toxicity assessment of polluted sediments using swimming behavior alteration test with Daphnia magna</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nikitin, O. V.; Nasyrova, E. I.; Nuriakhmetova, V. R.; Stepanova, N. Yu; Danilova, N. V.; Latypova, V. Z.</p> <p>2018-01-01</p> <p>Recently, behavioral responses of organisms have increasingly been used as a reliable and sensitive tool in aquatic toxicology. Behavior-related endpoints allow efficient study of the effects of sub-lethal exposure to contaminants. At present, behavioral parameters are frequently determined by digital analysis of video recordings using computer vision technology. However, most studies evaluate the toxicity of aqueous solutions; due to methodological difficulties associated with sample preparation, there are few examples of studies assessing the toxicity of other environmental objects (wastes, sewage sludges, soils, sediments, etc.) by computer vision technology. This paper presents the results of assessment of the swimming behavior alterations of Daphnia magna in elutriates from both uncontaminated natural and artificially chromium-contaminated bottom sediments. 
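The swimming-speed endpoint used in such computer-vision tests reduces to differencing tracked centroid positions frame by frame. A minimal sketch with an invented trajectory and frame rate:

```python
import numpy as np

def median_speed(track_cm, fps):
    """Median swimming speed (cm/s) from per-frame centroid positions."""
    track_cm = np.asarray(track_cm, dtype=float)
    step_lengths = np.linalg.norm(np.diff(track_cm, axis=0), axis=1)
    return float(np.median(step_lengths * fps))

# Hypothetical 25 fps track of a daphnid moving steadily at 0.6 cm/s.
fps = 25
t = np.arange(100) / fps
track = np.c_[0.6 * t, np.zeros_like(t)]
print(median_speed(track, fps))
```

The toxicity comparison in the abstract then contrasts this statistic over the whole exposure period against the final minute.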
It was shown that in elutriate from chromium-contaminated bottom sediments (chromium concentration 115±5.7 μg l-1) the swimming speed of daphnids decreased from 0.61 cm s-1 (median speed over the whole period) to 0.50 cm s-1 (median speed during the last minute of the experiment). Relocating Daphnia from the culture medium to the extract from the non-polluted sediments did not essentially change their swimming activity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016APS..DFDM19006A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016APS..DFDM19006A"><span>Spatial organization and Synchronization in collective swimming of Hemigrammus bleheri</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ashraf, Intesaaf; Ha, Thanh-Tung; Godoy-Diana, Ramiro; Thiria, Benjamin; Halloy, Jose; Collignon, Bertrand; Laboratoire de Physique et Mécanique des Milieux Hétérogènes (PMMH) Team; Laboratoire Interdisciplinaire des Energies de Demain (LIED) Team</p> <p>2016-11-01</p> <p>In this work, we study the collective swimming of Hemigrammus bleheri fish using experiments in a shallow swimming channel. We use high-speed video recordings to track the midline kinematics and the spatial organization of fish pairs and triads. Synchronization is characterized by the occurrence of "out of phase" and "in phase" configurations. We show that the synchronization state is highly correlated with swimming speed, and that increased synchronization leads to more efficient swimming, as assessed by the Strouhal number. In the case of fish pairs, the collective swimming is 2D and the spatial organization is characterized by two characteristic lengths: the lateral and longitudinal separation distances between the fish. For fish triads, different swimming patterns or configurations with three-dimensional structure are observed. 
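The separation-distance measurements described above can be sketched as a nearest-neighbour computation over reconstructed 3-D positions; the triad coordinates below are invented for illustration:

```python
import numpy as np

def nearest_neighbour_distances(positions):
    """NND for each fish: distance to its closest group mate.
    positions: (N, 3) array of 3-D fish positions."""
    positions = np.asarray(positions, dtype=float)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # ignore each fish's zero self-distance
    return d.min(axis=1)

# Hypothetical triad positions (in body lengths) from a 3-D reconstruction.
triad = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.5, 0.9, 0.1]])
print(nearest_neighbour_distances(triad))
```

Repeating this per frame and per swimming speed gives exactly the kind of constant-NND statistic the study reports.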
We performed 3D kinematic analysis by employing 3D reconstruction using the Direct Linear Transformation (DLT). We show that fish keep their nearest neighbor distance (NND) constant irrespective of swimming speed and configuration. We also point out characteristic angles between neighbors, which impose preferred patterns. Finally, we give some perspectives on spatial organization for larger populations. Sorbonne Paris City College of Doctoral Schools. European Union Information and Communication Technologies project ASSISIbf, FP7-ICT-FET-601074.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18929349','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18929349"><span>The effects of video game playing on attention, memory, and executive control.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Boot, Walter R; Kramer, Arthur F; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele</p> <p>2008-11-01</p> <p>Expert video game players often outperform non-players on measures of basic attention and performance. Such differences might result from exposure to video games or they might reflect other group differences between those people who do or do not play video games. Recent research has suggested a causal relationship between playing action video games and improvements in a variety of visual and attentional skills (e.g., [Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423, 534-537]). The current research sought to replicate and extend these results by examining both expert/non-gamer differences and the effects of video game playing on tasks tapping a wider range of cognitive abilities, including attention, memory, and executive control. 
Non-gamers played 20+ h of an action video game, a puzzle game, or a real-time strategy game. Expert gamers and non-gamers differed on a number of basic cognitive skills: experts could track objects moving at greater speeds, better detected changes to objects stored in visual short-term memory, switched more quickly from one task to another, and mentally rotated objects more efficiently. Strikingly, extensive video game practice did not substantially enhance performance for non-gamers on most cognitive tasks, although they did improve somewhat in mental rotation performance. Our results suggest that at least some differences between video game experts and non-gamers in basic cognitive performance result either from far more extensive video game experience or from pre-existing group differences in abilities that result in a self-selection effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28806903','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28806903"><span>Service provider perceptions of transitioning from audio to video capability in a telehealth system: a qualitative evaluation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Clay-Williams, Robyn; Baysari, Melissa; Taylor, Natalie; Zalitis, Dianne; Georgiou, Andrew; Robinson, Maureen; Braithwaite, Jeffrey; Westbrook, Johanna</p> <p>2017-08-14</p> <p>Telephone consultation and triage services are increasingly being used to deliver health advice. Availability of high speed internet services in remote areas allows healthcare providers to move from telephone to video telehealth services. Current approaches for assessing video services have limitations. This study aimed to identify the challenges for service providers associated with transitioning from audio to video technology. 
Using a mixed-method, qualitative approach, we observed training of service providers who were required to switch from telephone to video, and conducted pre- and post-training interviews with 15 service providers and their trainers on the challenges associated with transitioning to video. Two full days of simulation training were observed. Data were transcribed and analysed using an inductive approach; a modified constant comparative method was employed to identify common themes. We found three broad categories of issues likely to affect implementation of the video service: social, professional, and technical. Within these categories, eight sub-themes were identified; they were: enhanced delivery of the health service, improved health advice for people living in remote areas, safety concerns, professional risks, poor uptake of video service, system design issues, use of simulation for system testing, and use of simulation for system training. This study identified a number of unexpected potential barriers to successful transition from telephone to the video system. Most prominent were technical and training issues, and personal safety concerns about transitioning from telephone to video media. Addressing identified issues prior to implementation of a new video telehealth system is likely to improve effectiveness and uptake.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19850024737&hterms=Audio+recorder&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DAudio%2Brecorder','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19850024737&hterms=Audio+recorder&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3DAudio%2Brecorder"><span>Balloon-borne video cassette recorders for digital data storage</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Althouse, W. E.; Cook, W. 
R.</p> <p>1985-01-01</p> <p>A high-speed, high-capacity digital data storage system has been developed for a new balloon-borne gamma-ray telescope. The system incorporates sophisticated yet easy-to-use and economical consumer products: the portable video cassette recorder (VCR) and a relatively new item - the digital audio processor. The in-flight recording system employs eight VCRs and will provide a continuous data storage rate of 1.4 megabits/sec throughout a 40 hour balloon flight. Data storage capacity is 25 gigabytes and power consumption is only 10 watts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10611E..13Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10611E..13Z"><span>Gas leak detection in infrared video with background modeling</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zeng, Xiaoxia; Huang, Likun</p> <p>2018-03-01</p> <p>Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and optimize the results by combining a connected-component algorithm and the nine-spaces algorithm in the subsequent processing steps. 
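The ViBe-style background model underlying this detector can be sketched as follows. This is a simplified grayscale variant with invented parameters, not the authors' optimized version:

```python
import numpy as np

class ViBeLike:
    """Minimal ViBe-style background model (grayscale, per-pixel samples).

    A pixel is background if its value lies within `radius` of at least
    `min_matches` stored samples; matching pixels randomly refresh one of
    their samples (conservative, in-place update)."""

    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2, seed=0):
        self.rng = np.random.default_rng(seed)
        noise = self.rng.integers(-10, 10, (n_samples,) + first_frame.shape)
        self.samples = np.clip(first_frame[None].astype(int) + noise, 0, 255)
        self.radius, self.min_matches = radius, min_matches

    def segment(self, frame):
        close = np.abs(self.samples - frame.astype(int)) < self.radius
        background = close.sum(axis=0) >= self.min_matches
        # Stochastic update: overwrite one random sample at background pixels.
        idx = self.rng.integers(0, len(self.samples), frame.shape)
        update = background & (self.rng.random(frame.shape) < 1 / 16)
        for k in range(len(self.samples)):
            sel = update & (idx == k)
            self.samples[k][sel] = frame[sel]
        return ~background              # True = foreground (e.g. a gas plume)

bg = np.full((48, 64), 100, dtype=np.uint8)
model = ViBeLike(bg)
frame = bg.copy()
frame[10:20, 10:20] = 200               # synthetic intruding warm region
mask = model.segment(frame)
print(mask[10:20, 10:20].all(), mask[30:, 30:].any())
```

The connected-component pass mentioned in the abstract would then run on `mask` to discard isolated noise pixels and keep plume-sized blobs.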
Experiments show the effectiveness of the proposed method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA253005','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA253005"><span>Special Course on Skin Friction Drag Reduction Held in Rhode-St-Genese, Belgium on 2-6 March 1992 (Reduction de Trainee de Frottement)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1992-03-01</p> <p>… bodies, such as torpedoes, are ideal targets for applying transition delay … or by using these in combination with other control methods will also be … sponsored Tani and his colleagues … Nikuradse's experimental project in Cambridge University, high speed video data for sand grain rough pipes (see [125 … turbulent kinetic energy balance in a LEBU-modified turbulent boundary layer. Proc. 11th Turbulence … don (1990) - also video presented at EDRM4 Lausanne</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li class="active"><span>21</span></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li class="active"><span>22</span></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23277965','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23277965"><span>Invited review article: high-speed flexure-guided nanopositioning: mechanical design and control issues.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yong, Y K; Moheimani, S O R; Kenton, B J; Leang, K K</p> <p>2012-12-01</p> <p>Recent interest in high-speed scanning probe microscopy for high-throughput applications including video-rate atomic force microscopy and probe-based nanofabrication has sparked attention on the 
development of high-bandwidth flexure-guided nanopositioning systems (nanopositioners). Such nanopositioners are designed to move samples with sub-nanometer resolution with positioning bandwidth in the kilohertz range. State-of-the-art designs incorporate uniquely designed flexure mechanisms driven by compact and stiff piezoelectric actuators. This paper surveys key advances in mechanical design and control of dynamic effects and nonlinearities, in the context of high-speed nanopositioning. Future challenges and research topics are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3885434','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3885434"><span>Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin</p> <p>2014-01-01</p> <p>Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average-speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. 
These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007SPIE.6516E..05N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007SPIE.6516E..05N"><span>Video and LAN solutions for a digital OR: the Varese experience</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nocco, Umberto; Cocozza, Eugenio; Sivo, Monica; Peta, Giancarlo</p> <p>2007-03-01</p> <p>Purpose: build 20 ORs equipped with independent video acquisition and broadcasting systems and a powerful LAN connectivity. Methods: a digital PC controlled video matrix has been installed in each OR. The LAN connectivity has been developed to grant data entering the OR and high speed connectivity to a server and to broadcasting devices. Video signals are broadcasted within the OR. Fixed inputs and five additional video inputs have been placed in the OR. Images can be stored locally on a high capacity HDD and a DVD recorder. Images can be also stored in a central archive for future acquisition and reference. Ethernet plugs have been placed within the OR to acquire images and data from the Hospital LAN; the OR is connected to the server/archive using a dedicated optical fiber. Results: 20 independent digital ORs have been built. Each OR is "self contained" and images can be digitally managed and broadcasted. Security issues concerning both image visualization and electrical safety have been fulfilled and each OR is fully integrated in the Hospital LAN. 
Conclusions: Digital ORs were fully implemented; they fulfill surgeons' needs in terms of video acquisition and distribution and grant high quality video for each kind of surgery in a major hospital.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23670014','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23670014"><span>Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo</p> <p>2013-05-06</p> <p>A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. 
Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.6819E..0KW','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.6819E..0KW"><span>The video watermarking container: efficient real-time transaction watermarking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wolf, Patrick; Hauer, Enrico; Steinebach, Martin</p> <p>2008-02-01</p> <p>When transaction watermarking is used to secure sales in online shops by embedding transaction specific watermarks, the major challenge is embedding efficiency: Maximum speed by minimal workload. This is true for all types of media. Video transaction watermarking presents a double challenge. Video files not only are larger than for example music files of the same playback time. In addition, video watermarking algorithms have a higher complexity than algorithms for other types of media. Therefore online shops that want to protect their videos by transaction watermarking are faced with the problem that their servers need to work harder and longer for every sold medium in comparison to audio sales. In the past, many algorithms responded to this challenge by reducing their complexity. But this usually results in a loss of either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: A preparation stage and the finalization stage. 
In the preparation stage, the video is divided into embedding segments. For each segment, one copy marked with "0" and another marked with "1" are created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations. It thus allows efficient creation of individually watermarked video files.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19810004952','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19810004952"><span>Data acquisition and analysis in the DOE/NASA Wind Energy Program</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Neustadter, H. E.</p> <p>1980-01-01</p> <p>Four categories of data systems, each responding to a distinct information need, are presented. The categories are: control, technology, engineering and performance. The focus is on the technology data system, which consists of the following elements: sensors which measure critical parameters such as wind speed and direction, output power, blade loads and strains, and tower vibrations; remote multiplexing units (RMU) mounted on each wind turbine which frequency modulate, multiplex and transmit sensor outputs; the instrumentation available to record, process and display these signals; and centralized computer analysis of data. The RMU characteristics and multiplexing techniques are presented. 
Data processing is illustrated by following a typical signal through instruments such as the analog tape recorder, analog to digital converter, data compressor, digital tape recorder, video (CRT) display, and strip chart recorder.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.osti.gov/sciencecinema/biblio/1406353','SCIGOVIMAGE-SCICINEMA'); return false;" href="http://www.osti.gov/sciencecinema/biblio/1406353"><span>Why Can’t You Go Faster than Light?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/sciencecinema/">ScienceCinema</a></p> <p>Lincoln, Don</p> <p>2018-01-16</p> <p>One of the most counterintuitive facts of our universe is that you can’t go faster than the speed of light. From this single observation arise all of the mind-bending behaviors of special relativity. But why is this so? In this in-depth video, Fermilab’s Dr. Don Lincoln explains the real reason that you can’t go faster than the speed of light. It will blow your mind.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014JGRB..119.5369G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014JGRB..119.5369G"><span>Pyroclast Tracking Velocimetry: A particle tracking velocimetry-based tool for the study of Strombolian explosive eruptions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca</p> <p>2014-07-01</p> <p>Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and allow inferences to be drawn on the dynamics of volcanic activity. 
However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, a significant preprocessing step is used to remove the image background and to detect the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original predictor of velocity based on the optical flow equation. Finally, postprocessing corrects the systematic errors of the measurements. Four high-speed videos of Strombolian explosions from Yasur and Stromboli volcanoes, representing various observation conditions, have been used to test the efficiency of PyTV against manual analysis. In all cases, >10⁶ pyroclasts have been successfully detected and tracked by PyTV, with a precision of 1 m/s for the velocity and 20% for the size of the pyroclast. On each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links physical properties and motion of individual pyroclasts. 
It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement for other geological and geophysical volcano observation systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1810147B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1810147B"><span>Video-Seismic coupling for debris flow study at Merapi Volcano, Indonesia</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Budi Wibowo, Sandy; Lavigne, Franck; Mourot, Philippe; Sukatja, Bambang</p> <p>2016-04-01</p> <p>Previous lahar disasters caused at least 44,252 deaths worldwide from 1600 to 2010, of which 52% were due to a single event in the late 20th century. The need for a better understanding of lahar flow behavior has made the general public and stakeholders much more curious than before. However, the dynamics of lahars in motion are still poorly understood because data acquisition on active flows is difficult. This research presents a debris-flow-type lahar that occurred on February 28, 2014 at Merapi volcano in Indonesia. The lahar dynamics were studied in the frame of the SEDIMER Project (Sediment-related Disasters following the 2010 centennial eruption of Merapi Volcano, Java, Indonesia) based on coupling between video and seismic data analysis. We installed a seismic station at the Gendol river (1090 meters asl, 4.6 km south of the summit) consisting of two geophones placed 76 meters apart parallel to the river, a high definition camera on the edge of the river and two rain gauges on the east and west sides of the river. The results showed that the behavior of this lahar changed continuously during the event. The lahar front moved at an average speed of 4.1 m/s at the observation site. Its maximum velocity reached 14.5 m/s with a peak discharge of 473 m³/s. 
The maximum depth of the flow reached 7 m. Almost 600 blocks with a main axis longer than 1 m were identified on the surface of the lahar during 36 minutes, which represents an average discharge of 17 blocks per minute. Seismic frequency ranged from 10 to 150 Hz. However, there was a clear difference between upstream and downstream seismic characteristics. The interpretation of this difference could be improved by the analysis of the video recordings, especially to differentiate the debris-flow and hyperconcentrated-flow phases. The lahar video is accessible online to the broader community (https://www.youtube.com/watch?v=wlVssRoaPbw). Keywords: lahar, video, seismic signal, debris flow, hyperconcentrated flow, Merapi, Indonesia.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1990SPIE.1232...61R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1990SPIE.1232...61R"><span>Objective analysis of image quality of video image capture systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rowberg, Alan H.</p> <p>1990-07-01</p> <p>As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. 
A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give horizontal or vertical streaking. 
While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AIPC.1145...67H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AIPC.1145...67H"><span>2d Granular Gas in Knudsen Regime and in Microgravity Excited by Vibration: Velocity and Position Distributions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hou, M.; Liu, R.; Li, Y.; Lu, K.; Garrabos, Y.; Evesque, P.</p> <p>2009-06-01</p> <p>The dynamics of a quasi-2d dissipative granular gas is studied in microgravity conditions (of the order of 10⁻⁴ g) in the limit of the Knudsen regime. The gas, made of 4 spheres, is confined in a square cell enforced to follow linear sinusoidal vibration in ten different vibration modes. The trajectory of one of the particles is tracked and reconstructed from the 2-hour video data. From statistical analysis, we find that (i) loss due to wall friction is small, (ii) the trajectory looks ergodic in space, and (iii) the distribution ρ(v) of speed follows an exponential distribution, i.e. ρ(v) ≈ exp(-v/v_{x0,y0}), with v_{x0,y0} being a characteristic velocity along the direction parallel (y) or perpendicular (x) to the vibration direction. This law deviates strongly from the Boltzmann distribution of speed in a molecular gas. 
Comparisons of this result with previous measurements in an Earth environment, and with what was found in a 3d cell [1] in a 10⁻² g environment, are given.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EJPh...38d4001B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EJPh...38d4001B"><span>Monitoring the biomechanics of a wheelchair sprinter racing the 100 m final at the 2016 Paralympic Games</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barbosa, Tiago M.; Coelho, Eduarda</p> <p>2017-07-01</p> <p>The aim was to run a case study of the biomechanics of a wheelchair sprinter racing the 100 m final at the 2016 Paralympic Games. Stroke kinematics was measured by video analysis in each 20 m split. Race kinetics was estimated by employing an analytical model that encompasses the computation of the rolling friction, drag, energy output and energy input. A maximal average speed of 6.97 m s⁻¹ was reached in the last split. It was estimated that the contributions of the rolling friction and drag force would account for 54% and 46% of the total resistance at maximal speed, respectively. Energy input and output increased over the event. However, we failed to note a steady state or any impairment of the energy input and output in the last few metres of the race. 
Data suggest that the 100 m is too short an event for the sprinter to be able to achieve his maximal power over such a distance.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20170008510','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20170008510"><span>Comprehensive Oculomotor Behavioral Response Assessment (COBRA)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stone, Leland S. (Inventor); Liston, Dorion B. (Inventor)</p> <p>2017-01-01</p> <p>An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially organized, appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). 
This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by someone without a high degree of scientific sophistication or extensive training.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014PhPro..56..759S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014PhPro..56..759S"><span>Laser Spot Welding of Copper-aluminum Joints Using a Pulsed Dual Wavelength Laser at 532 and 1064 nm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stritt, Peter; Hagenlocher, Christian; Kizler, Christine; Weber, Rudolf; Rüttimann, Christoph; Graf, Thomas</p> <p></p> <p>A modulated pulsed laser source emitting green and infrared laser light is used to join the dissimilar metals copper and aluminum. The resultant dynamic welding process is analyzed using the back-reflected laser light and high speed video observations of the interaction zone. Different pulse shapes are applied to influence the melt pool dynamics and thereby the forming grain structure and intermetallic phases. The results of high-speed images and back-reflections prove that a modulation of the pulse shape is transferred to oscillations of the melt pool at the applied frequency. The outcome of the melt pool oscillation is shown by the metallurgically prepared cross-section, which indicates different solidification lines and grain shapes. An energy-dispersive x-ray analysis shows the mixture and the resultant distribution of the two metals, copper and aluminum, within the spot weld. 
It can be seen that the mixture is homogenized by the observed melt pool oscillations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70032867','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70032867"><span>Sprint swimming performance of wild bull trout (Salvelinus confluentus)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Mesa, M.G.; Phelps, J.; Weiland, L.K.</p> <p>2008-01-01</p> <p>We conducted laboratory experiments to determine the sprint swimming performance of wild juvenile and adult bull trout Salvelinus confluentus. Sprint swimming speeds were estimated using high-speed digital video analysis. Thirty-two bull trout were tested in sizes ranging from about 10 to 31 cm. Of these, 14 fish showed at least one motivated, vigorous sprint. When plotted as a function of time, the velocity of the fish increased rapidly, with the relation linear or slightly curvilinear. Their maximum velocity, or Vmax, ranged from 1.3 to 2.3 m/s, was usually achieved within 0.8 to 1.0 s, and was independent of fish size. Distances covered during these sprints ranged from 1.4 to 2.4 m. Our estimates of sprint swimming performance are the first reported for this species and may be useful for producing or modifying fish passage structures that allow safe and effective passage of fish without overly exhausting them. © 2008 by the Northwest Scientific Association. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013SPIE.8913E..0EK','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013SPIE.8913E..0EK"><span>Design of a system based on DSP and FPGA for video recording and replaying</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kang, Yan; Wang, Heng</p> <p>2013-08-01</p> <p>This paper presents a video recording and replaying system with an architecture based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA). The system performs encoding, recording, decoding and replaying of Video Graphics Array (VGA) signals which are displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor, which is used for the large amount of complicated calculation during digital signal processing. The FPGA is a coprocessor for preprocessing video signals and implementing logic control in the system. In the hardware design of the system, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is utilized to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) in the system. This transfer mode can avoid the bottleneck of the data transfer and simplify the circuit between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips are used to implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface of an Integrated Drive Electronics (IDE) Hard Disk (HD), which has a high speed in data access and does not rely on a computer. Main functions of the logic on the FPGA are described and screenshots of the behavioral simulation are provided in this paper. 
In the design of the program on the DSP, Enhanced Direct Memory Access (EDMA) channels are used to transfer data between the FIFO and the SDRAM without CPU intervention, freeing the CPU for computation and saving processing time. JPEG2000 is implemented to obtain high fidelity in video recording and replaying. Techniques for achieving high code performance are briefly presented. The data-processing capability of the system is satisfactory, and the smoothness of the replayed video is acceptable. Owing to its design flexibility and reliable operation, the DSP- and FPGA-based system for video recording and replaying has considerable promise for post-event analysis, simulated training and so forth.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20140006436','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20140006436"><span>Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt</p> <p>2013-01-01</p> <p>The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. 
This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. 
SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMIN33C..06J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMIN33C..06J"><span>Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.</p> <p>2013-12-01</p> <p>The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. 
The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009IEITC..92.3893N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009IEITC..92.3893N"><span>Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea</p> <p></p> <p>Currently deployed mobile networks including High Speed Downlink Packet Access (HSDPA) offer only best-effort Quality of Service (QoS). 
In wireless best effort networks, the bandwidth variation is a critical problem, especially for mobile devices with small buffers. This is because the bandwidth variation leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of the available bandwidth (AB) estimation for the HSDPA network and the transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB and the estimated AB is fed back to the server through real-time transport control protocol (RTCP) packets. Then, the server adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network providing higher video quality and lower transmission delay.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/27710','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/27710"><span>Video Vehicle Detector Verification System (V2DVS) operators manual and project final report.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2012-03-01</p> <p>The accurate detection of the presence, speed and/or length of vehicles on roadways is recognized as critical for effective roadway congestion management and safety.
Vehicle presence sensors are commonly used for traffic volume measurement and co...</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://rosap.ntl.bts.gov/view/dot/30863','DOTNTL'); return false;" href="https://rosap.ntl.bts.gov/view/dot/30863"><span>Highway-railway at-grade crossing structures : long term settlement measurements and assessments.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntlsearch.bts.gov/tris/index.do">DOT National Transportation Integrated Search</a></p> <p></p> <p>2016-03-22</p> <p>A common maintenance technique to correct track geometry at bridge transitions is hand tamping.
The first section presents a non-invasive track monitoring system involving high-speed video cameras that evaluates the change in track behavior before an...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23885414','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23885414"><span>Development of an imaging system for single droplet characterization using a droplet generator.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D</p> <p>2012-01-01</p> <p>The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of the non-imaging techniques and the recent improvements in digital image acquisition and processing increased the interest in using high speed imaging techniques in pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments have shown the necessity for having a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity which consisted of a high speed camera with a 6 µs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm and a Xenon light source without diffuser used as a backlight.
For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a larger field of view of 7.5 cm x 9.5 cm in combination with a halogen spotlight with a diffuser and the high speed camera can be used.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27463843','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27463843"><span>Video gaming in school children: How much is enough?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pujol, Jesus; Fenoll, Raquel; Forns, Joan; Harrison, Ben J; Martínez-Vilavella, Gerard; Macià, Dídac; Alvarez-Pedrerol, Mar; Blanco-Hinojo, Laura; González-Ortiz, Sofía; Deus, Joan; Sunyer, Jordi</p> <p>2016-09-01</p> <p>Despite extensive debate, the proposed benefits and risks of video gaming in young people remain to be empirically clarified, particularly as regards an optimal level of use. In 2,442 children aged 7 to 11 years, we investigated relationships between weekly video game use, selected cognitive abilities, and conduct-related problems. A large subgroup of these children (n = 260) was further examined with magnetic resonance imaging approximately 1 year later to assess the impact of video gaming on brain structure and function. Playing video games for 1 hour per week was associated with faster and more consistent psychomotor responses to visual stimulation. Remarkably, no further change in motor speed was identified in children playing >2 hours per week. By comparison, the weekly time spent gaming was steadily associated with conduct problems, peer conflicts, and reduced prosocial abilities.
These negative implications were clearly visible only in children at the extreme of our game-playing distribution, with 9 hours or more of video gaming per week. At a neural level, changes associated with gaming were most evident in basal ganglia white matter and functional connectivity. Significantly better visuomotor skills can be seen in school children playing video games, even with relatively small amounts of use. Frequent weekly use, by contrast, was associated with conduct problems. Further studies are needed to determine whether moderate video gaming causes improved visuomotor skills and whether excessive video gaming causes conduct problems, or whether children who already have these characteristics simply play more video games. Ann Neurol 2016;80:424-433. © 2016 American Neurological Association.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6143..370M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6143..370M"><span>Real-time CT-video registration for continuous endoscopic guidance</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Merritt, Scott A.; Rai, Lav; Higgins, William E.</p> <p>2006-03-01</p> <p>Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-Video registration in under 1/15th of a second. 
This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per sec. with minimal user-intervention.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.8004E..0RL','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.8004E..0RL"><span>User-oriented summary extraction for soccer video based on multimodal analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Huayong; Jiang, Shanshan; He, Tingting</p> <p>2011-11-01</p> <p>An advanced user-oriented summary extraction method for soccer video is proposed in this work. Firstly, an algorithm of user-oriented summary extraction for soccer video is introduced. 
A novel approach is introduced that integrates multimodal analysis, such as extraction and analysis of stadium features, moving object features, audio features and text features. From these features, the semantics of the soccer video and the highlight mode are obtained. The highlight positions can then be found and combined by highlight degree to obtain the video summary. The experimental results for sports video of world cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27173640','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27173640"><span>Effects of domain-specific exercise load on speed and accuracy of a domain-specific perceptual-cognitive task.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Schapschröer, M; Baker, J; Schorer, J</p> <p>2016-08-01</p> <p>In the context of perceptual-cognitive expertise it is important to know whether physiological loads influence perceptual-cognitive performance. This study examined whether a handball specific physical exercise load influenced participants' speed and accuracy in a flicker task. At rest and during a specific interval exercise of 86.5-90% HRmax, 35 participants (experts: n=8, advanced: n=13, novices: n=14) performed a handball specific flicker task with two types of patterns (structured and unstructured). For reaction time, results revealed moderate effect sizes for group, with experts reacting faster than advanced and advanced reacting faster than novices, and for structure, with structured videos being performed faster than unstructured ones.
A significant interaction for structure×group was also found, with experts and advanced players faster for structured videos, and novices faster for unstructured videos. For accuracy, significant main effects were found for structure with structured videos solved more accurately. A significant interaction for structure×group was revealed, with experts and advanced more accurate for structured scenes and novices more accurate for unstructured scenes. A significant interaction was also found for condition×structure; at rest, unstructured and structured scenes were performed with the same accuracy while under physical exercise, structured scenes were solved more accurately. No other interactions were found. These results were somewhat surprising given previous work in this area, although the impact of a specific physical exercise on a specific perceptual-cognitive task may be different from those tested generally. Copyright © 2016 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1988SPIE..849..191B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1988SPIE..849..191B"><span>Robotic Attention Processing And Its Application To Visual Guidance</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Barth, Matthew; Inoue, Hirochika</p> <p>1988-03-01</p> <p>This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. 
These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. 
These two applications show the potential of local visual processing in its use for robotic attention processing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10620E..1RX','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10620E..1RX"><span>Extracting information of fixational eye movements through pupil tracking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng</p> <p>2018-01-01</p> <p>Human eyes are never completely static even when they are fixating on a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, can prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because these signals could not be completely extracted until now. In this paper, we developed a new eye movement detection device with a high-speed camera. This device includes a beam splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, the experiments of pupil tracking were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are clearly revealed.
The experimental results show that the device is feasible and effective, so that the device can be applied in further characteristic analysis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29135450','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29135450"><span>Influence of acquisition frame-rate and video compression techniques on pulse-rate variability estimation from vPPG signal.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cerina, Luca; Iozzia, Luca; Mainardi, Luca</p> <p>2017-11-14</p> <p>In this paper, common time- and frequency-domain variability indexes obtained by pulse rate variability (PRV) series extracted from video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effect of different video acquisition frame-rates starting from 60 frames-per-second (fps) down to 7.5 fps and different video compression techniques using both lossless and lossy codecs on PRV parameters estimation. Video recordings were acquired through an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated, signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for regions of interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique was employed to convert the video frames sequence to a pulsatile signal. The frame-rate degradation was simulated on video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. 
The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; the FFV1 codec was used for lossless compression and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics and local peak displacements, which degrades performance on all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) indexes in time-domain and the low frequency (LF) and high frequency (HF) power in frequency domain were the parameters which degraded most with frame-rate reduction. Such a degradation can be partially mitigated by up-sampling the measured signal at a higher frequency (namely 60 Hz). Concerning the video compression, the results showed that compression techniques are suitable for the storage of vPPG recordings, although lossless or intra-frame compression is to be preferred over inter-frame compression methods. FFV1 performance is very close to the uncompressed (UNC) version with less than 45% disk size. H.264 showed a degradation of the PRV estimation directly correlated with the increase of the compression ratio.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1983JFM...129...27S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1983JFM...129...27S"><span>The characteristics of low-speed streaks in the near-wall region of a turbulent boundary layer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smith, C. R.; Metzler, S.
P.</p> <p>1983-04-01</p> <p>The discovery of an instantaneous spanwise velocity distribution consisting of alternating zones of high- and low-speed fluid which develop in the viscous sublayer and extend into the logarithmic region was one of the first clues to the existence of an ordered structure within a turbulent boundary layer. The present investigation is concerned with quantitative flow-visualization results obtained with the aid of a high-speed video flow visualization system which permits the detailed visual examination of both the statistics and characteristics of low-speed streaks over a much wider range of Reynolds numbers than has been possible before. Attention is given to streak appearance, mean streak spacing, the spanwise distribution of streaks, streak persistence, and aspects of streak merging and intermittency. The results indicate that the statistical characteristics of the spanwise spacing of low-speed streaks are essentially invariant with Reynolds number.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28802457','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28802457"><span>Reducing the impact of speed dispersion on subway corridor flow.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Qiao, Jing; Sun, Lishan; Liu, Xiaoming; Rong, Jian</p> <p>2017-11-01</p> <p>The rapid increase in the volume of subway passengers in Beijing has necessitated higher requirements for the safety and efficiency of subway corridors. Speed dispersion is an important factor that affects safety and efficiency. This paper aims to analyze the management control methods for reducing pedestrian speed dispersion in subways. The characteristics of the speed dispersion of pedestrian flow were analyzed according to field videos.
Control measures based on placing traffic signs, yellow markings, and guardrails were proposed to alleviate speed dispersion. The results showed that placing traffic signs, yellow markings, and a guardrail improved safety and efficiency for all four volumes of pedestrian traffic flow, and the best-performing measure was the guardrail. Furthermore, the guardrail's optimal position and design were explored. The research findings provide a rationale for subway managers in optimizing pedestrian traffic flow in subway corridors. Copyright © 2017. Published by Elsevier Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE10031E..4XT','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE10031E..4XT"><span>Comparison of H.265/HEVC encoders</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Trochimiuk, Maciej</p> <p>2016-09-01</p> <p>H.265/HEVC is the state-of-the-art video compression standard, which allows bitrate reductions of up to 50% compared with its predecessor, H.264/AVC, while maintaining equal perceptual video quality. The growth in coding efficiency was achieved by increasing the number of available intra- and inter-frame prediction features and improvements in existing ones, such as entropy encoding and filtering. Nevertheless, to achieve real-time performance of the encoder, simplifications in the algorithm are inevitable. Some features and coding modes must be skipped to reduce the time needed to evaluate modes forwarded to rate-distortion optimisation. Thus, the potential acceleration of the encoding process comes at the expense of coding efficiency.
In this paper, a trade-off between video quality and encoding speed of various H.265/HEVC encoders is discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/751027','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/751027"><span>Digital video technologies and their network requirements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>R. P. Tsang; H. Y. Chen; J. M. Brandt</p> <p>1999-11-01</p> <p>Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them.
Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Video&pg=4&id=EJ1172401','ERIC'); return false;" href="https://eric.ed.gov/?q=Video&pg=4&id=EJ1172401"><span>Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars</p> <p>2018-01-01</p> <p>In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/976941','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/976941"><span>On the response of rubbers at high strain rates.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Niemczura, Johnathan Greenberg</p> <p></p> <p>In this report, we examine the propagation of tensile waves of finite deformation in rubbers through experiments and analysis. Attention is focused on the propagation of one-dimensional dispersive and shock waves in strips of latex and nitrile rubber. Tensile wave propagation experiments were conducted at high strain-rates by holding one end fixed and displacing the other end at a constant velocity.
A high-speed video camera was used to monitor the motion and to determine the evolution of strain and particle velocity in the rubber strips. Analysis of the response through the theory of finite waves and quantitative matching between the experimental observations and analytical predictions was used to determine an appropriate instantaneous elastic response for the rubbers. This analysis also yields the tensile shock adiabat for rubber. Dispersive waves as well as shock waves are also observed in free-retraction experiments; these are used to quantify hysteretic effects in rubber.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920013211','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920013211"><span>An electronic pan/tilt/zoom camera system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zimmermann, Steve; Martin, H. Lee</p> <p>1991-01-01</p> <p>A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the effect that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms.
A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan tilt rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20080004529','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20080004529"><span>Omniview motionless camera orientation system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Zimmermann, Steven D. (Inventor); Martin, H. Lee (Inventor)</p> <p>1999-01-01</p> <p>A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a hemispherical field-of-view that utilizes no moving parts. The imaging device is based on the effect that the image from a fisheye lens, which produces a circular image of an entire hemispherical field-of-view, can be mathematically corrected using high speed electronic circuitry. More specifically, an incoming fisheye image from any image acquisition source is captured in memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical field-of-view without the need for any mechanical mechanisms. The preferred embodiment of the image transformation device can provide corrected images at real-time rates, compatible with standard video equipment.
The device can be used for any application where a conventional pan-and-tilt or orientation mechanism might be considered, including inspection, monitoring, surveillance, and target acquisition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26357246','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26357246"><span>Human Motion Capture Data Tailored Transform Coding.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He</p> <p>2015-07-01</p> <p>Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. 
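The data-dependent orthogonal bases described above can be illustrated with a truncated SVD; the clip size, rank k, and quantization step below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

def encode_clip(clip, k, step=0.01):
    """Toy data-dependent transform coder: represent a mocap clip
    (frames x channels) in a truncated SVD basis fitted to the clip itself,
    then uniformly quantize the transform coefficients."""
    _, _, Vt = np.linalg.svd(clip, full_matrices=False)
    basis = Vt[:k]                 # data-dependent orthogonal basis (k x channels)
    coeffs = clip @ basis.T        # transform: coefficients with low dependency
    q = np.round(coeffs / step)    # uniform quantization to integers
    return q, basis

def decode_clip(q, basis, step=0.01):
    """Inverse transform: dequantize and project back to the channel space."""
    return (q * step) @ basis

# Synthetic smooth, low-rank "mocap" clip: 120 frames, 30 channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120)[:, None]
clip = np.sin(t * np.arange(1, 4)) @ rng.standard_normal((3, 30))
q, basis = encode_clip(clip, k=3)
err = np.max(np.abs(decode_clip(q, basis) - clip))
```

Entropy coding the integer coefficients q together with the small basis matrix would complete the coder; the point is that a basis fitted to the clip concentrates the signal in far fewer coefficients than a fixed transform such as the DCT.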
Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23932168','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23932168"><span>Different computer-assisted sperm analysis (CASA) systems highly influence sperm motility parameters.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Boryshpolets, S; Kowalski, R K; Dietrich, G J; Dzyuba, B; Ciereszko, A</p> <p>2013-10-15</p> <p>In this study, we examined different computer-assisted sperm analysis (CASA) systems (CRISMAS, Hobson Sperm Tracker, and Image J CASA) on the exact same video recordings to evaluate the differences in sperm motility parameters related to the specific CASA used. To cover a wide range of sperm motility parameters, we chose 12-second video recordings at 25 and 50 Hz frame rates after sperm motility activation using three taxonomically distinct fish species (sterlet: Acipenser ruthenus L.; common carp: Cyprinus carpio L.; and rainbow trout: Oncorhynchus mykiss Walbaum) that are characterized by essential differences in sperm behavior during motility. Systematically higher values of velocity and beat cross frequency (BCF) were observed in video recordings obtained at 50 Hz frame frequency compared with 25 Hz for all three systems. Motility parameters were affected by the CASA and species used for analyses. Image J and CRISMAS calculated higher curvilinear velocity (VCL) values for rainbow trout and common carp at 25 Hz frequency compared with the Hobson Sperm Tracker, whereas at 50 Hz, a significant difference was observed only for rainbow trout sperm recordings. 
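The systematic velocity differences between 25 and 50 Hz recordings reported above follow directly from how curvilinear velocity (VCL) is computed from the tracked head positions; a sketch with a synthetic zig-zag track (all parameters illustrative):

```python
import numpy as np

def vcl(track, fps):
    """Curvilinear velocity: total point-to-point path length of the tracked
    head positions divided by the track duration (a standard CASA definition)."""
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps.sum() * fps / len(steps)

# Synthetic oscillating track sampled at 50 Hz; taking every second frame
# simulates the same recording analysed at 25 Hz.
t = np.arange(0, 1, 1 / 50)
track50 = np.column_stack([100 * t, 5 * np.sin(2 * np.pi * 15 * t)])
track25 = track50[::2]
v50 = vcl(track50, 50)
v25 = vcl(track25, 25)
```

Halving the frame rate removes intermediate track points, and by the triangle inequality the reconstructed path can only get shorter, so the apparent VCL drops, which is consistent with the higher velocities observed at 50 Hz above.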
No significant difference was observed between the CASA systems for sterlet sperm motility at 25 and 50 Hz. Additional analysis of 1-second segments taken at three time points (1, 6, and 12 seconds of the recording) revealed a dramatic decrease in common carp and rainbow trout sperm speed. The motility parameters of sterlet spermatozoa did not change significantly during the 12-second motility period, so sterlet should be considered a suitable model for longer motility analyses. Our results indicated that the CASA system used can affect motility results even when the same motility recordings are used. These results could be critically altered by the recording quality, time of analysis, and frame rate of the camera, and could lead to erroneous conclusions. Copyright © 2013 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JGRD..12212786V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JGRD..12212786V"><span>Features of Upward Positive Leaders Initiated From Towers in Natural Cloud-to-Ground Lightning Based on Simultaneous High-Speed Videos, Measured Currents, and Electric Fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena</p> <p>2017-12-01</p> <p>Original simultaneous records of currents, close electric field, and high-speed videos of natural negative cloud-to-ground lightning striking the tower of Morro do Cachimbo Station are used to reveal typical features of upward positive leaders before the attachment, including their initiation and mode of propagation. 
According to the results, upward positive leaders initiate some hundreds of microseconds prior to the return stroke, while a continuous uprising current of about 4 A and superimposed pulses of a few tens of amperes flow along the tower. Upon leader initiation, the electric field measured 50 m away from the tower at ground level is about 60 kV/m. The corresponding average field roughly estimated 0.5 m above the tower top is higher than 0.55 MV/m. As in laboratory experiments, the common propagation mode of upward positive leaders is continuous development, without steps, from their initiation. Unlike downward negative leaders, upward positive leaders typically do not branch off, though they can bifurcate under the effect of a downward negative leader's secondary branch approaching their lateral surface. The upward positive leader's estimated average two-dimensional propagation speed, in the range of 0.06 × 10⁶ to 0.16 × 10⁶ m/s, has the same order of magnitude as that of downward negative leaders. 
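Two-dimensional propagation speeds like those quoted above are obtained by tracking the leader tip across successive high-speed frames; a sketch, with a hypothetical frame rate and pixel scale rather than the station's actual values:

```python
import numpy as np

def leader_speed_2d(tip_px, fps, m_per_px):
    """Average 2-D propagation speed from leader-tip pixel positions tracked
    in successive high-speed video frames (illustrative; the paper's exact
    tracking procedure is not described in the abstract)."""
    d = np.linalg.norm(np.diff(tip_px, axis=0), axis=1)  # pixels per frame
    return d.mean() * m_per_px * fps                     # metres per second

# Hypothetical track: 10 frames at 20 kfps, 0.5 m per pixel,
# tip rising 10 px per frame -> 10 * 0.5 * 20000 = 1e5 m/s (0.1 x 10^6 m/s).
tip = np.column_stack([np.zeros(10), 10.0 * np.arange(10)])
v = leader_speed_2d(tip, 20_000, 0.5)
```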
Apparently, the speed tends to increase just before attachment.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28419951','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28419951"><span>Can fractal methods applied to video tracking detect the effects of deltamethrin pesticide or mercury on the locomotion behavior of shrimps?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson 
Carlos Soares E; Nogueira, Romildo de Albuquerque</p> <p>2017-08-01</p> <p>Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15 µg L⁻¹ deltamethrin pesticide or 10 µg L⁻¹ mercuric chloride. Results showed no change after 1 min, 4 h, 24 h, and 48 h of treatment. However, after 72 and 96 h of treatment, both the linear methods (track length, mean speed, and mean distance from the current to the previous track point) and the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of the angular parameters of the track-point vectors and of lacunarity was not sensitive to those changes. None of the methods detected effects of mercury exposure. Implemented in software, these mathematical and fractal methods are low-cost, useful tools for the toxicological analysis of shrimps in food and water quality control and in the biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=Lawlor&pg=4&id=EJ074390','ERIC'); return false;" href="https://eric.ed.gov/?q=Lawlor&pg=4&id=EJ074390"><span>Teacher Expectations: A Study of Their Genesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Lawlor, Francis X.; Lawlor, Elizabeth P.</p> <p>1973-01-01</p> <p>Studied the consistency and formation of teacher's judgements about child's abilities by having science methods course undergraduates watch two 10-minute video-taped lessons conducted among nine children. Indicated the most agreement on two extremes and the use of children's accomplishment speeds as judgment criteria. (CC)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=FIBER+AND+OPTICS&pg=3&id=EJ471074','ERIC'); return false;" href="https://eric.ed.gov/?q=FIBER+AND+OPTICS&pg=3&id=EJ471074"><span>Fiber Optics: Deregulate and Deploy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Suwinski, Jan H.</p> <p>1993-01-01</p> <p>Describes fiber optic technology, explains its use in education and commercial settings, and recommends regulations and legislation that will speed its use to create broadband information networks. 
Topics discussed include distance learning; interactive video; costs; and the roles of policy makers, lawmakers, public advocacy groups, and consumers.…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/2398235','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/2398235"><span>Reaction time, impulsivity, and attention in hyperactive children and controls: a video game technique.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P</p> <p>1990-07-01</p> <p>Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. 
Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9988E..0TB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9988E..0TB"><span>Aerial vehicles collision avoidance using monocular vision</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Balashov, Oleg; Muraviev, Vadim; Strotov, Valery</p> <p>2016-10-01</p> <p>In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on a preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information, distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. 
The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SPIE10615E..04W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SPIE10615E..04W"><span>Long-term scale adaptive tracking with kernel correlation filters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui</p> <p>2018-04-01</p> <p>Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of the input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked against a cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. 
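The kernel correlation filters that give the tracker above its speed learn the filter in the Fourier domain, where correlation becomes element-wise multiplication. A minimal linear (MOSSE-style) sketch of that idea, rather than the authors' full kernelized, multi-feature design:

```python
import numpy as np

def train_filter(x, g, lam=1e-3):
    """Learn a correlation filter (in the Fourier domain) such that
    correlating the training patch x with it reproduces the desired
    response g: per-frequency ridge regression, lam regularizes."""
    X, G = np.fft.rfft2(x), np.fft.rfft2(g)
    return (G * np.conj(X)) / (X * np.conj(X) + lam)

def respond(H, z):
    """Correlation response for a new patch z; its argmax locates the target."""
    return np.fft.irfft2(H * np.fft.rfft2(z), s=z.shape)

rng = np.random.default_rng(1)
x = rng.standard_normal((32, 32))
# Desired response: a narrow Gaussian peaked at the target position (8, 12).
yy, xx = np.mgrid[0:32, 0:32]
g = np.exp(-((yy - 8) ** 2 + (xx - 12) ** 2) / 2.0)
H = train_filter(x, g)
peak = np.unravel_index(np.argmax(respond(H, x)), (32, 32))
```

Tracking then alternates between taking the argmax of respond on each new search window and re-training on the estimated location; the kernelized variant replaces the per-frequency ridge regression with a kernel correlation but keeps the same FFT structure, which is why it runs at high frame rates.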
Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE10020E..17H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE10020E..17H"><span>Pedestrian detection based on redundant wavelet transform</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun</p> <p>2016-10-01</p> <p>Intelligent video surveillance is to analysis video or image sequences captured by a fixed or mobile surveillance camera, including moving object detection, segmentation and recognition. By using it, we can be notified immediately in an abnormal situation. Pedestrian detection plays an important role in an intelligent video surveillance system, and it is also a key technology in the field of intelligent vehicle. So pedestrian detection has very vital significance in traffic management optimization, security early warn and abnormal behavior detection. Generally, pedestrian detection can be summarized as: first to estimate moving areas; then to extract features of region of interest; finally to classify using a classifier. Redundant wavelet transform (RWT) overcomes the deficiency of shift variant of discrete wavelet transform, and it has better performance in motion estimation when compared to discrete wavelet transform. Addressing the problem of the detection of multi-pedestrian with different speed, we present an algorithm of pedestrian detection based on motion estimation using RWT, combining histogram of oriented gradients (HOG) and support vector machine (SVM). 
First, three intensities of movement (IoM) are estimated using RWT and the corresponding areas are segmented. According to the different IoM, a region proposal (RP) is generated. Then, the features of an RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases and the final detection results are obtained. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=commercial+AND+gases&id=EJ852866','ERIC'); return false;" href="https://eric.ed.gov/?q=commercial+AND+gases&id=EJ852866"><span>Innovative Uses of Video Analysis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Brown, Douglas; Cox, Anne J.</p> <p>2009-01-01</p> <p>The value of video analysis in physics education is well established, and both commercial and free educational video analysis programs are readily available. The video format is familiar to students, contains a wealth of spatial and temporal data, and provides a bridge between direct observations and abstract representations of physical phenomena.…
Craig</p> <p>2014-01-01</p> <p>This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950005881','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950005881"><span>Conceptual design of the AE481 Demon Remotely Piloted Vehicle (RPV)</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hailes, Chris; Kolver, Jill; Nestor, Julie; Patterson, Mike; Selow, Jan; Sagdeo, Pradip; Katz, Kenneth</p> <p>1994-01-01</p> <p>This project report presents a conceptual design for a high speed remotely piloted vehicle (RPV). The AE481 Demon RPV is capable of performing video reconnaissance missions and electronic jamming over hostile territory. The RPV cruises at a speed of Mach 0.8 and an altitude of 300 feet above the ground throughout its mission. It incorporates a rocket assisted takeoff and a parachute-airbag landing. Missions are preprogrammed, but in-flight changes are possible. The Demon is the answer to a military need for a high speed, low altitude RPV. 
The design methods, onboard systems, and avionics payload are discussed in this conceptual design report along with economic viability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..SHK.U3002K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..SHK.U3002K"><span>Underwater sympathetic detonation of pellet explosive</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kubota, Shiro; Saburi, Tei; Nagayama, Kunihito</p> <p>2017-06-01</p> <p>The underwater sympathetic detonation of pellet explosives was recorded by high-speed photography. The diameter and the thickness of the pellets were 20 and 10 mm, respectively. The experimental system consisted of a precise electric detonator, two grams of composition C4 booster, and three pellets, set in a water tank. A high-speed video camera, the Shimadzu HPV-X, was used at 10 Mfps. The underwater explosions of the precise electric detonator, the C4 booster, and a single pellet were also recorded by high-speed photography to estimate the propagation processes of the underwater shock waves. 
Numerical simulation of the underwater sympathetic detonation of the pellet explosives was also carried out and compared with the experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1335628','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1335628"><span>Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Yang, Yongchao; Dorn, Charles; Mancini, Tyler</p> <p></p> <p>Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. 
Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. Then the signal aliasing properties in modal analysis are exploited to estimate the modal frequencies and damping ratios. 
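The aliasing relation exploited in the final step above maps a true modal frequency to the apparent frequency seen by a slow camera, and can be inverted up to a set of candidates; a sketch:

```python
def alias_frequency(f_true, fs):
    """Apparent frequency of a tone at f_true when uniformly sampled at fs:
    the spectrum folds into the band [0, fs/2]."""
    f = f_true % fs
    return min(f, fs - f)

# A 42 Hz structural mode recorded by a 30 Hz video camera appears at 12 Hz.
v = alias_frequency(42.0, 30.0)

# Inverting the fold: true frequencies consistent with a 12 Hz aliased peak
# are k*fs +/- 12 Hz; additional knowledge (e.g. expected mode ordering)
# selects the physical one.
candidates = sorted(k * 30.0 + s * 12.0 for k in range(3) for s in (1, -1)
                    if k * 30.0 + s * 12.0 > 0)
```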
Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006SPIE.6074...83A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006SPIE.6074...83A"><span>Audio-based queries for video retrieval over Java enabled mobile devices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ahmad, Iftikhar; Cheikh, Faouzi Alaya; Kiranyaz, Serkan; Gabbouj, Moncef</p> <p>2006-02-01</p> <p>In this paper we propose a generic framework for efficient retrieval of audiovisual media based on its audio content. This framework is implemented in a client-server architecture where the client application is developed in Java to be platform independent whereas the server application is implemented for the PC platform. The client application adapts to the characteristics of the mobile device where it runs such as screen size and commands. The entire framework is designed to take advantage of the high-level segmentation and classification of audio content to improve speed and accuracy of audio-based media retrieval. Therefore, the primary objective of this framework is to provide an adaptive basis for performing efficient video retrieval operations based on the audio content and types (i.e. speech, music, fuzzy and silence). 
Experimental results confirm that such an audio-based video retrieval scheme can be used from mobile devices to search and retrieve video clips efficiently over wireless networks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JEI....27b3035Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JEI....27b3035Z"><span>High-throughput sample adaptive offset hardware architecture for high-efficiency video coding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin</p> <p>2018-03-01</p> <p>A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method of rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. 
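The SAO edge-offset stage being accelerated above classifies each reconstructed sample by comparing it with two neighbors along a chosen direction and adds a per-category offset; a one-dimensional sketch (the offset values are illustrative; in HEVC the encoder selects and signals them):

```python
import numpy as np

def sao_edge_offset_1d(row, offsets):
    """HEVC-style SAO edge-offset along one direction: classify each sample
    by comparison with its two neighbors (category 1 = local valley,
    2/3 = concave/convex corner, 4 = local peak, 0 = none) and add the
    category's offset. Border samples are left unfiltered."""
    out = np.asarray(row, dtype=np.int32).copy()
    for i in range(1, len(row) - 1):
        a = int(np.sign(int(row[i]) - int(row[i - 1])))
        b = int(np.sign(int(row[i]) - int(row[i + 1])))
        if a < 0 and b < 0:
            cat = 1          # local valley
        elif a + b == -1:
            cat = 2          # concave corner
        elif a + b == 1:
            cat = 3          # convex corner
        elif a > 0 and b > 0:
            cat = 4          # local peak
        else:
            cat = 0          # flat / monotone: no offset
        out[i] += offsets[cat]
    return out

# Valleys are pulled up and peaks pulled down, smoothing ringing artifacts.
offsets = {0: 0, 1: 2, 2: 1, 3: -1, 4: -2}
filtered = sao_edge_offset_1d([10, 7, 10, 10, 13, 10], offsets)
```

The hardware challenge addressed in the paper is that this classification, plus the rate-distortion search over offset values and directions, must run for every sample at 8K video rates, which motivates the parallel VLSI design.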
The proposed architecture can reach an operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003APS..DFD.DE004D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003APS..DFD.DE004D"><span>Experimental visualization of rapid maneuvering fish</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Daigh, S.; Techet, A. H.</p> <p>2003-11-01</p> <p>A freshwater tropical fish, Danio aequippinatus, is studied undergoing rapid turning and fast starting maneuvers. This agile species of fish is ideal for this study as it is capable of quick turning and darting motions up to 5 g. The fish studied are 4-5 cm in length. The speed and kinematics of the maneuvers are determined by video analysis. Planar and stereo Particle Image Velocimetry (PIV) are used to map the vortical patterns in the wake of the maneuvering fish. PIV visualizations reveal that during C-shaped maneuvers a ring-shaped jet vortex is formed. Fast starting behavior is also presented. 
PIV data are used to approximate the thrust-vectoring force produced during each maneuver.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1215..277M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1215..277M"><span>Dynamic Analysis of Irradiation of High Intensity Focused Ultrasound (HIFU) to Achieve a Living Tissue Perforation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mochizuki, Takashi; Kitazumi, Gontaro; Katsuike, Yasumasa; Hotta, Sayo; Maruyama, Hirotaka; Chiba, Toshio</p> <p>2010-03-01</p> <p>It is well known that tissue perforation is performed by the shock waves generated by the collapse of micro bubbles due to HIFU irradiation. However, the angle-dependency between the HIFU irradiation beam and the tissue membrane has not been studied in detail so far. The objective of this study was to investigate the HIFU parameters which were the most effective in perforating the tissues with the heart beating, especially the angle dependency of the beam, with observation using a high-speed video camera. The result shows that the ultrasound beam should be at a right angle to the membrane to perforate the tissue membrane effectively.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhRvE..87f1002Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhRvE..87f1002Z"><span>Collapse of an antibubble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zou, Jun; Ji, Chen; Yuan, BaoGang; Ruan, XiaoDong; Fu, Xin</p> <p>2013-06-01</p> <p>In contrast to a soap bubble, an antibubble is a liquid globule surrounded by a thin film of air. 
The collapse behavior of an antibubble is studied using a high-speed video camera. It is found that the retraction velocity of the thin air film of antibubbles depends on the thickness of the air film, e, the surface tension coefficient σ, etc., and varies linearly with (σ/ρe)^(1/2), according to theoretical analysis and experimental observations. During the collapse of the antibubble, many tiny bubbles can be formed at the rim of the air film due to the Rayleigh instability. In most cases, a larger bubble finally emerges, holding most of the volume of the air film.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014SPIE.9216E..1PA','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014SPIE.9216E..1PA"><span>Performance analysis of bi-directional broadband passive optical network using erbium-doped fiber amplifier</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Almalaq, Yasser; Matin, Mohammad A.</p> <p>2014-09-01</p> <p>The broadband passive optical network (BPON) has the ability to support high-speed data, voice, and video services to home and small-business customers. In this work, the performance of bi-directional BPON is analyzed for both downstream and upstream traffic with the help of an erbium-doped fiber amplifier (EDFA). A key advantage of BPON is its reduced cost: because BPON uses a passive splitter, maintenance costs between the provider and the customer side remain low. In the proposed research, BPON has been tested using a bit error rate (BER) analyzer. 
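The (σ/ρe)^(1/2) scaling reported in the antibubble study above can be illustrated numerically. The paper reports only the proportionality; the water-like surface tension, liquid density, and micron-scale film thicknesses below are assumed values for illustration.

```python
# Illustrative check of the (sigma/(rho*e))^(1/2) velocity scale for the
# retracting air film of an antibubble. All numbers are assumptions
# (water-like liquid, ~1 micron air film), not data from the paper.

def retraction_scale(sigma, rho, e):
    """Characteristic film retraction velocity scale, m/s."""
    return (sigma / (rho * e)) ** 0.5

sigma = 0.072   # surface tension, N/m (clean water, assumed)
rho = 1000.0    # density of the surrounding liquid, kg/m^3 (assumed)
for e in (0.5e-6, 1e-6, 2e-6):   # air-film thickness, m (assumed)
    print(f"e = {e:.1e} m  ->  v ~ {retraction_scale(sigma, rho, e):.1f} m/s")
```

Thinner films retract faster, consistent with the inverse square-root dependence on e stated in the abstract; for these assumed values the scale is of order meters per second.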
The BER analyzer reports the maximum Q factor, minimum bit error rate, and eye height.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23004333','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23004333"><span>Spatiotemporal evolution of thin liquid films during impact of water bubbles on glass on a micrometer to nanometer scale.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hendrix, Maurice H W; Manica, Rogerio; Klaseboer, Evert; Chan, Derek Y C; Ohl, Claus-Dieter</p> <p>2012-06-15</p> <p>Collisions between millimeter-size bubbles in water against a glass plate are studied using high-speed video. Bubble trajectory and shape are tracked simultaneously with laser interferometry between the glass and bubble surfaces that monitors spatial-temporal evolution of the trapped water film. Initial bubble bounces and the final attachment of the bubble to the surface have been quantified. While the global Reynolds number is large (∼10^2), the film Reynolds number remains small and permits analysis with lubrication theory with a tangentially immobile boundary condition at the air-water interface. 
Accurate predictions of dimple formation and subsequent film drainage are obtained.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014OptEn..53f3102J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014OptEn..53f3102J"><span>Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jo, Hyunho; Sim, Donggyu</p> <p>2014-06-01</p> <p>We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. 
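One step this class of bitstream processor must handle is removing H.264/AVC emulation prevention bytes (EPBs) before entropy decoding. Per the standard, a 0x03 byte that follows two consecutive zero bytes inside a NAL unit is an inserted escape byte and must be stripped. The following is a plain software reference for that step, not the paper's hardware design.

```python
# Reference implementation of H.264/AVC emulation-prevention-byte removal:
# inside a NAL unit, the byte 0x03 occurring after two 0x00 bytes is an
# escape insertion and is dropped during RBSP extraction.

def remove_epb(nal):
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0          # drop the emulation prevention byte
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

raw = bytes([0x00, 0x00, 0x03, 0x01, 0x65, 0x00, 0x00, 0x03, 0x00])
print(remove_epb(raw).hex())  # 00000165000000
```

A naive software loop like this touches every byte and branches often, which is exactly why the paper moves the operation into designated instructions to avoid initial delay and repeated memory accesses.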
Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5573440','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5573440"><span>Cross-Cultural Investigation of Male Gait Perception in Relation to Physical Strength and Speed</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Fink, Bernhard; Wübker, Marieke; Ostner, Julia; Butovskaya, Marina L.; Mezentseva, Anna; Muñoz-Reyes, José Antonio; Sela, Yael; Shackelford, Todd K.</p> <p>2017-01-01</p> <p>Previous research documents that men and women can accurately judge male physical strength from gait, but also that the sexes differ in attractiveness judgments of 
strong and weak male walkers. Women’s (but not men’s) attractiveness assessments of strong male walkers are higher than for weak male walkers. Here, we extend this research to assessments of strong and weak male walkers in Chile, Germany, and Russia. Men and women judged videos of virtual characters, animated with the walk movements of motion-captured men, on strength and attractiveness. In two countries (Germany and Russia), these videos were additionally presented at 70% (slower) and 130% (faster) of their original speed. Stronger walkers were judged to be stronger and more attractive than weak walkers, and this effect was independent of country (but not sex). Women tended to provide higher attractiveness judgments to strong walkers, and men tended to provide higher attractiveness judgments to weak walkers. In addition, German and Russian participants rated strong walkers most attractive at slow and fast speed. Thus, across countries men and women can assess male strength from gait, although they tended to differ in attractiveness assessments of strong and weak male walkers. Attractiveness assessments of male gait may be influenced by society-specific emphasis on male physical strength. 
PMID:28878720</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29046418','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29046418"><span>Escaping blood-fed malaria mosquitoes minimize tactile detection without compromising on take-off speed.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Muijres, F T; Chang, S W; van Veen, W G; Spitzen, J; Biemans, B T; Koehl, M A R; Dudley, R</p> <p>2017-10-15</p> <p>To escape after taking a blood meal, a mosquito must exert forces sufficiently high to take off when carrying a load roughly equal to its body weight, while simultaneously avoiding detection by minimizing tactile signals exerted on the host's skin. We studied this trade-off between escape speed and stealth in the malaria mosquito Anopheles coluzzii using 3D motion analysis of high-speed stereoscopic videos of mosquito take-offs and aerodynamic modeling. We found that during the push-off phase, mosquitoes enhanced take-off speed using aerodynamic forces generated by the beating wings in addition to leg-based push-off forces, whereby wing forces contributed 61% of the total push-off force. Exchanging leg-derived push-off forces for wing-derived aerodynamic forces allows the animal to reduce peak force production on the host's skin. By slowly extending their long legs throughout the push-off, mosquitoes spread push-off forces over a longer time window than insects with short legs, thereby further reducing peak leg forces. Using this specialized take-off behavior, mosquitoes are capable of reaching take-off speeds comparable to those of similarly sized fruit flies, but with weight-normalized peak leg forces that were only 27% of those of the fruit flies. By limiting peak leg forces, mosquitoes possibly reduce the chance of being detected by the host. 
The resulting combination of high take-off speed and low tactile signals on the host might help increase the mosquito's success in escaping from blood-hosts, which consequently also increases the chance of transmitting vector-borne diseases, such as malaria, to future hosts. © 2017. Published by The Company of Biologists Ltd.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4120448','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4120448"><span>Lower Extremity Muscle Activity During a Women’s Overhand Lacrosse Shot</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Millard, Brianna M.; Mercer, John A.</p> <p>2014-01-01</p> <p>The purpose of this study was to describe lower extremity muscle activity during the lacrosse shot. Participants (n=5 females, age 22±2 years, body height 162.6±15.2 cm, body mass 63.7±23.6 kg) were free from injury and had at least one year of lacrosse experience. The lead leg was instrumented with electromyography (EMG) leads to measure muscle activity of the rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and medial gastrocnemius (GA). Participants completed five trials of a warm-up speed shot (Slow) and a game speed shot (Fast). Video analysis was used to identify the discrete events defining specific movement phases. Full-wave rectified data were averaged per muscle per phase (Crank Back Minor, Crank Back Major, Stick Acceleration, Stick Deceleration). Average EMG per muscle was analyzed using a 4 (Phase) × 2 (Speed) ANOVA. BF was greater during Fast vs. Slow for all phases (p<0.05), while TA was not influenced by either Phase or Speed (p>0.05). 
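The EMG processing described in the lacrosse study above (full-wave rectification, then averaging per muscle within each video-defined movement phase) can be sketched in a few lines. The signal values and phase boundaries below are invented for illustration.

```python
# Minimal sketch of per-phase EMG averaging: the raw signal is full-wave
# rectified, then averaged within each phase delimited by video-identified
# events. Sample values and phase indices are made up.

def phase_averages(emg, phases):
    """emg: list of raw samples; phases: dict name -> (start, end) index."""
    rectified = [abs(x) for x in emg]          # full-wave rectification
    return {name: sum(rectified[s:e]) / (e - s)
            for name, (s, e) in phases.items()}

emg = [0.1, -0.4, 0.3, -0.2, 0.5, -0.6, 0.2, -0.1]
phases = {"crank_back": (0, 4), "stick_accel": (4, 8)}
print(phase_averages(emg, phases))
```

The per-phase averages are the quantities the study then compares across phase and shot speed with the 4 × 2 ANOVA.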
RF and GA were each influenced by the interaction of Phase and Speed (p<0.05) with GA being greater during Fast vs. Slow shots during all phases and RF greater during Crank Back Minor and Major as well as Stick Deceleration (p<0.05) but only tended to be greater during Stick Acceleration (p=0.076) for Fast vs. Slow. The greater muscle activity (BF, RF, GA) during Fast vs. Slow shots may have been related to a faster approach speed and/or need to create a stiff lower extremity to allow for faster upper extremity movements. PMID:25114727</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4849744','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4849744"><span>Effects of Vocal Fold Nodules on Glottal Cycle Measurements Derived from High-Speed Videoendoscopy in Children</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>The goal of this study is to quantify the effects of vocal fold nodules on vibratory motion in children using high-speed videoendoscopy. Differences in vibratory motion were evaluated in 20 children with vocal fold nodules (5–11 years) and 20 age and gender matched typically developing children (5–11 years) during sustained phonation at typical pitch and loudness. Normalized kinematic features of vocal fold displacements from the mid-membranous vocal fold point were extracted from the steady-state high-speed video. A total of 12 kinematic features representing spatial and temporal characteristics of vibratory motion were calculated. 
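Two of the cycle-level features commonly extracted in such high-speed-video analyses, the open quotient and the speed quotient, can be computed from per-cycle event times. The definitions below follow common usage (open quotient = open time / cycle period; speed quotient = opening time / closing time); the event times are invented for illustration.

```python
# Sketch of glottal-cycle feature computation from event times extracted
# from high-speed video. Times below are illustrative, not study data.

def cycle_features(t_open, t_max, t_close, t_next_open):
    """Event times (s): glottis opens, peak opening, glottis closes, next opening."""
    period = t_next_open - t_open
    opening = t_max - t_open          # opening-phase duration
    closing = t_close - t_max         # closing-phase duration
    return {"open_quotient": (t_close - t_open) / period,
            "speed_quotient": opening / closing}

f = cycle_features(t_open=0.000, t_max=0.003, t_close=0.005, t_next_open=0.008)
print(f)
```

Repeating this per cycle and taking means and standard deviations yields the average values and cycle-to-cycle variability the study reports.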
Average values and standard deviations (cycle-to-cycle variability) of the following kinematic features were computed: normalized peak displacement, normalized average opening velocity, normalized average closing velocity, normalized peak closing velocity, speed quotient, and open quotient. Group differences between children with and without vocal fold nodules were statistically investigated. While a moderate effect size was observed for the spatial feature of speed quotient and the temporal feature of normalized average closing velocity in children with nodules compared to vocally normal children, none of the features differed significantly between the groups after Bonferroni correction. The kinematic analysis of the mid-membranous vocal fold displacement revealed that children with nodules primarily differ from typically developing children in closing phase kinematics of the glottal cycle, whereas the opening phase kinematics are similar. Higher speed quotients and similar opening phase velocities suggest greater relative forces are acting on the vocal fold in the closing phase. These findings suggest that future large-scale studies should focus on spatial and temporal features related to the closing phase of the glottal cycle for differentiating the kinematics of children with and without vocal fold nodules. PMID:27124157</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020087606','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020087606"><span>Image Processor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>1989-01-01</p> <p>The Texas Instruments Programmable Remapper is a research tool used to determine how best to utilize the still-usable part of a patient's visual field by remapping manipulated imagery onto it. 
It is an offshoot of a NASA program for speeding up and improving the accuracy of pattern recognition in video imagery. The Remapper enables an image to be "pushed around" so that more of it falls onto the functional portions of the retina of a low-vision person. It works at video rates, and researchers hope to significantly reduce its size and cost, creating a wearable prosthesis for visually impaired people.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015SPIE.9456E..0IB','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015SPIE.9456E..0IB"><span>Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bieszczad, Grzegorz</p> <p>2015-05-01</p> <p>This article describes an external digital interface specially designed for a thermographic camera built at the Military University of Technology. Its aim is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we developed based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The image-transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol. The transmission protocol engine was described in VHDL and tested in FPGA hardware. The link can transmit 1280x1024@60Hz 24-bit video data over a single signal pair and was tested by transmitting the thermal camera's picture to a remote monitor. 
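A quick arithmetic check shows why a single multi-gigabit transceiver pair suffices for the stream described above. Blanking and packet-protocol overhead are ignored, so this is a lower bound on the required serial line rate.

```python
# Raw video payload of the link described above:
# 1280x1024 pixels, 60 Hz refresh, 24 bits per pixel.

width, height, fps, bpp = 1280, 1024, 60, 24
payload_bps = width * height * fps * bpp
print(f"payload = {payload_bps / 1e9:.2f} Gbps")  # payload = 1.89 Gbps
```

About 1.89 Gbps of payload leaves headroom on a 2.5 Gbps serial lane for framing and protocol overhead.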
The dedicated video link reduces power consumption compared with solutions built on ASIC encoders and decoders for video links such as DVI or packet-based DisplayPort, while also reducing the wiring needed to establish the link to a single pair. The article describes the modules integrated in the FPGA design: synchronization to the video source, video-stream packetization, interfacing to the transceiver module, and dynamic clock generation for video-standard conversion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001SPIE.4674..158N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001SPIE.4674..158N"><span>Holo-Chidi video concentrator card</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nwodoh, Thomas A.; Prabhakar, Aditya; Benton, Stephen A.</p> <p>2001-12-01</p> <p>The Holo-Chidi Video Concentrator Card is a frame buffer for the Holo-Chidi holographic video processing system. Holo-Chidi is designed at the MIT Media Laboratory for real-time computation of computer-generated holograms and the subsequent display of the holograms at video frame rates. The Holo-Chidi system is made of two sets of cards - the set of Processor cards and the set of Video Concentrator Cards (VCCs). The Processor cards are used for hologram computation, data archival/retrieval from a host system, and for higher-level control of the VCCs. The VCC formats computed holographic data from multiple hologram computing Processor cards, converting the digital data to analog form to feed the acousto-optic modulators of the Media Lab's Mark-II holographic display system. 
The Video Concentrator Card is made of: a High-Speed I/O (HSIO) interface through which data is transferred from the hologram-computing Processor cards, a set of FIFOs and video RAM used as a buffer for the hololines being displayed, a one-chip integrated microprocessor and peripheral combination that handles communication with other VCCs and furnishes the card with a USB port, a co-processor which controls display data formatting, and D-to-A converters that convert digital fringes to analog form. The co-processor is implemented with an SRAM-based FPGA with over 500,000 gates and controls all the signals needed to format the data from the multiple Processor cards into the format required by Mark-II. A VCC has three HSIO ports through which up to 500 Megabytes of computed holographic data can flow from the Processor Cards to the VCC per second. A Holo-Chidi system with three VCCs has enough frame buffering capacity to hold up to thirty-two 36-megabyte hologram frames at a time. Pre-computed holograms may also be loaded into the VCC from a host computer through the low-speed USB port. Both the microprocessor and the co-processor in the VCC can access the main system memory used to store control programs and data for the VCC. The Card also generates the control signals used by the scanning mirrors of Mark-II. 
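The transfer and buffering figures quoted above can be sanity-checked with back-of-envelope arithmetic. Reading the "up to 500 Megabytes per second" as the aggregate rate over a VCC's three HSIO ports is an assumption; the abstract does not state whether the figure is per port or per card.

```python
# Back-of-envelope check of the Holo-Chidi VCC figures quoted above.
# Assumption: 500 MB/s is the aggregate input rate over the three HSIO
# ports of one VCC (the abstract is ambiguous on this point).

in_rate_mb = 500      # MB/s into one VCC (assumed aggregate)
frame_mb = 36         # MB per hologram frame
n_frames = 32         # frames held by a three-VCC system

print(f"ingest: {in_rate_mb / frame_mb:.1f} frames/s")   # ingest: 13.9 frames/s
print(f"buffer: {n_frames * frame_mb} MB total")         # buffer: 1152 MB total
```

At roughly 14 frames per second of ingest and about 1.15 GB of total frame buffer, the figures are mutually consistent with the card's stated role of feeding holograms to the display at video frame rates.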
In this paper we discuss the design of the VCC and its implementation in the Holo-Chidi system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=example+AND+comparative+AND+research&pg=3&id=ED576390','ERIC'); return false;" href="https://eric.ed.gov/?q=example+AND+comparative+AND+research&pg=3&id=ED576390"><span>Interaction Support for Information Finding and Comparative Analysis in Online Video</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Xia, Jinyue</p> <p>2017-01-01</p> <p>Current online video interaction is typically designed with a focus on straightforward distribution and passive consumption of individual videos. This "click play, sit back and watch" context is typical of videos for entertainment. However, there are many task scenarios that require active engagement and analysis of video content as a…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19946380','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19946380"><span>The Addition of a Video Game to Stationary Cycling: The Impact on Energy Expenditure in Overweight Children.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Haddock, Bryan L; Siegel, Shannon R; Wikin, Linda D</p> <p>2009-01-01</p> <p>INTRODUCTION: The prevalence of obesity in children has reached epidemic proportions with over 37% of children aged 6-11 years in the U.S. being classified as "at risk for overweight" or "overweight." Utilization of active video games has been proposed as one possible mechanism to help shift the tide of the obesity epidemic. 
PURPOSE: The purpose of this study was to determine if riding a stationary bike that controlled a video game would lead to significantly greater energy expenditure than riding the same bike without the video game connected. METHODS: Twenty children, 7-14 years old, with a BMI classification of "at risk for overweight" or "overweight" participated in this study. Following familiarization, energy expenditure was evaluated while riding a stationary bike for 20 minutes. One test was performed without the addition of a video game and one test with the bike controlling the speed of a car on the video game. RESULTS: Oxygen consumption and energy expenditure were significantly elevated above baseline in both conditions. Energy expenditure was significantly higher while riding the bike as it controlled the video game (4.4 ± 1.2 kcal·min^(-1)) than when riding the bike by itself (3.7 ± 1.1 kcal·min^(-1)) (p<0.05). Perceived exertion was not significantly different between the two sessions (p>0.05). CONCLUSION: Using a stationary bike to control a video game led to greater energy expenditure than riding a stationary bike without the video game and without a related increase in perceived exertion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19202495','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19202495"><span>Live lecture versus video-recorded lecture: are students voting with their feet?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cardall, Scott; Krupat, Edward; Ulrich, Michael</p> <p>2008-12-01</p> <p>In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. 
The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lecture as opposed to attending lecture, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. 
A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19910000372&hterms=learn+better+video&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dlearn%2Bbetter%2Bvideo','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19910000372&hterms=learn+better+video&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3Dlearn%2Bbetter%2Bvideo"><span>Effects Of Frame Rates In Video Displays</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kellogg, Gary V.; Wagner, Charles A.</p> <p>1991-01-01</p> <p>Report describes experiment on subjective effects of rates at which display on cathode-ray tube in flight simulator updated and refreshed. Conducted to learn more about jumping, blurring, flickering, and multiple lines that observer perceives when line moves at high speed across screen of a calligraphic CRT.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23453956','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23453956"><span>Action video games make dyslexic children read better.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Franceschini, Sandro; Gori, Simone; Ruffino, Milena; Viola, Simona; Molteni, Massimo; Facoetti, Andrea</p> <p>2013-03-18</p> <p>Learning to read is extremely difficult for about 10% of children; they are affected by a neurodevelopmental disorder called dyslexia [1, 2]. The neurocognitive causes of dyslexia are still hotly debated [3-12]. 
Dyslexia remediation is far from being fully achieved [13], and the current treatments demand high levels of resources [1]. Here, we demonstrate that only 12 hr of playing action video games (not involving any direct phonological or orthographic training) drastically improve the reading abilities of children with dyslexia. We tested reading, phonological, and attentional skills in two matched groups of children with dyslexia before and after they played action or nonaction video games for nine sessions of 80 min per day. We found that only playing action video games improved children's reading speed, without any cost in accuracy, more so than 1 year of spontaneous reading development and more than or equal to highly demanding traditional reading treatments. Attentional skills also improved during action video game training. It has been demonstrated that action video games efficiently improve attention abilities [14, 15]; our results showed that this attention improvement can directly translate into better reading abilities, providing a new, fast, fun remediation of dyslexia that has theoretical relevance in unveiling the causal role of attention in reading acquisition. Copyright © 2013 Elsevier Ltd. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28157222','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28157222"><span>Assessment of the Apple iPad as a low-vision reading aid.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Morrice, E; Johnson, A P; Marinier, J-A; Wittich, W</p> <p>2017-06-01</p> <p>Purpose: Low-vision clients frequently report having problems with reading. Using magnification, reading performance (as measured by reading speed) can be improved by up to 200%. 
Current magnification aids can be expensive or bulky; therefore, we explored whether the Apple iPad offers comparable performance in improving reading speeds, in comparison with a closed-circuit television (CCTV) video magnifier or other magnification devices. Methods: We recruited 100 participants aged 24-97 years with low vision who were literate and cognitively capable, of whom 57 had age-related macular degeneration. To assess reading, participants read standardized iReST texts and were tested for comprehension. We compared reading speed on the Apple iPad (10 inch) with that of the CCTV, home magnification devices, and baseline measures. Results: All assistive devices improved reading rates in comparison to baseline (P<0.001, Hedges' g>1); however, there was no difference in improvement across devices (P>0.05, Hedges' g<0.1). When experience was taken into account, those with iPad experience read, on average, 30 words per minute faster than first-time iPad users, whereas CCTV experience did not influence reading speed. Conclusions: In our sample, the Apple iPad was as effective as currently used technologies for improving reading rates. Moreover, exposure to, and experience with, the Apple iPad might increase reading speed with that device. 
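The effect sizes reported above use Hedges' g, a standardized mean difference scaled by the pooled standard deviation with a small-sample bias correction. As an illustrative sketch only (synthetic reading speeds, not the study's data), the statistic can be computed as:

```python
import math

def hedges_g(sample1, sample2):
    """Hedges' g: standardized mean difference with small-sample bias correction."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp  # Cohen's d
    # Approximate small-sample correction factor
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical reading speeds (words per minute): assistive device vs. unaided baseline
device = [82, 90, 75, 85, 88, 80]
baseline = [55, 60, 48, 52, 58, 50]
g = hedges_g(device, baseline)  # well above 1 for these synthetic numbers
```

By convention, g above roughly 0.8 is read as a large effect, which is why the g > 1 improvements over baseline are substantial while the g < 0.1 differences between devices are negligible.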
A larger sample size, however, is needed to do subgroup analysis on who would optimally benefit from each type of magnification device.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020062179','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020062179"><span>Innovative Solution to Video Enhancement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>2001-01-01</p> <p>Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. 
Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016SPIE.9997E..06P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016SPIE.9997E..06P"><span>Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf</p> <p>2016-10-01</p> <p>Recent technological advancements in hardware have made higher-quality cameras possible. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2 In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. 
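The per-frame pixel analysis mentioned above is demanding at this scale: a 9000 x 2400 stream at 30 fps is roughly 648 megapixels per second. As a minimal illustration of the underlying idea, not the authors' NIR pipeline, a naive frame-differencing detector (numpy assumed) might look like:

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Naive frame-differencing motion detector (illustrative sketch only)."""
    # Widen to int16 so the subtraction of uint8 frames cannot wrap around
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no pixel changed beyond the threshold
    # Bounding box of changed pixels as a crude object location
    return xs.min(), ys.min(), xs.max(), ys.max()

# Two synthetic 8-bit grayscale frames with a small bright blob appearing
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[40:45, 60:65] = 255
bbox = detect_motion(prev, curr)  # (60, 40, 64, 44)
```

Real systems replace this with background modeling and tracking, but the core cost is the same: a full-frame comparison between consecutive frames, which is why real-time operation at panoramic resolutions drives the system-level design.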
The presented results show UAV detection from a field trial conducted in August 2015.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=EM-0039-02&hterms=Startups&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DStartups','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=EM-0039-02&hterms=Startups&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3DStartups"><span>XB-70A during startup and ramp taxi</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p></p> <p>1968-01-01</p> <p>The XB-70 was the world's largest experimental aircraft. Capable of flight at speeds of three times the speed of sound (2,000 miles per hour) at altitudes of 70,000 feet, the XB-70 was used to collect in-flight information for use in the design of future supersonic aircraft, military and civilian. This 35-second video shows the startup of the XB-70A airplane engines, the beginning of its taxi to the runway, and a turn on the ramp that shows the unique configuration of this aircraft.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997SPIE.3033..368E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997SPIE.3033..368E"><span>Movement measurement of isolated skeletal muscle using imaging microscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.</p> <p>1997-05-01</p> <p>An imaging-microscopy methodology to measure contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s, is presented. 
For this, a CCD camera coupled to a microscope and a high-speed digital image acquisition system that captures 960 images per second are used. The images are digitally processed in a PC and displayed on a video monitor. A maximal field of 0.198 X 0.198 mm2 and a spatial resolution of 3.5 micrometers are obtained.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=videos+AND+pedagogy&pg=3&id=EJ1063576','ERIC'); return false;" href="https://eric.ed.gov/?q=videos+AND+pedagogy&pg=3&id=EJ1063576"><span>Links between Characteristics of Collaborative Peer Video Analysis Events and Literacy Teachers' Outcomes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Arya, Poonam; Christ, Tanya; Chiu, Ming</p> <p>2015-01-01</p> <p>This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. 
Emergent coding and Statistical Discourse Analysis were used to analyze the data.…</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_22");'>22</a></li> <li><a href="#" onclick='return showDiv("page_23");'>23</a></li> <li><a href="#" onclick='return showDiv("page_24");'>24</a></li> <li class="active"><span>25</span></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> <div class="footer-extlink text-muted" style="margin-bottom:1rem; text-align:center;">Some links on this page may take you to non-federal websites. Their policies may differ from this site.</div> </div><!-- container --> <a id="backToTop" href="#top"> Top </a> <footer> <nav> <ul class="links"> <li><a href="/sitemap.html">Site Map</a></li> <li><a href="/website-policies.html">Website Policies</a></li> <li><a href="https://www.energy.gov/vulnerability-disclosure-policy" target="_blank">Vulnerability Disclosure Program</a></li> <li><a href="/contact.html">Contact Us</a></li> </ul> </nav> </footer> <script type="text/javascript"><!-- // var lastDiv = ""; function showDiv(divName) { // hide last div if (lastDiv) { document.getElementById(lastDiv).className = "hiddenDiv"; } //if value of the box is not nothing and an object with that name exists, then change the class if (divName && document.getElementById(divName)) { document.getElementById(divName).className = "visibleDiv"; lastDiv = divName; } } //--> </script> <script> /** * Function that tracks a click on an outbound link in Google Analytics. * This function takes a valid URL string as an argument, and uses that URL string * as the event label. 
*/ var trackOutboundLink = function(url,collectionCode) { try { h = window.open(url); setTimeout(function() { ga('send', 'event', 'topic-page-click-through', collectionCode, url); }, 1000); } catch(err){} }; </script> <!-- Google Analytics --> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-1122789-34', 'auto'); ga('send', 'pageview'); </script> <!-- End Google Analytics --> <script> showDiv('page_1') </script> </body> </html>