Sample records for video motion sensor

  1. Visualizing the history of living spaces.

    PubMed

    Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder

    2007-01-01

    The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.

  2. Using the Xbox Kinect sensor for positional data acquisition

    NASA Astrophysics Data System (ADS)

    Ballester, Jorge; Pheatt, Chuck

    2013-01-01

    The Kinect sensor was introduced in November 2010 by Microsoft for the Xbox 360 video game system. It is designed to be positioned above or below a video display to track player body and hand movements in three dimensions (3D). The sensor contains a red, green, and blue (RGB) camera, a depth sensor, an infrared (IR) light source, a three-axis accelerometer, and a multi-array microphone, as well as hardware required to transmit sensor information to an external receiver. In this article, we evaluate the capabilities of the Kinect sensor as a 3D data-acquisition platform for use in physics experiments. Data obtained for a simple pendulum, a spherical pendulum, projectile motion, and a bouncing basketball are presented. Overall, the Kinect sensor is found to be a useful data-acquisition tool for motion studies in the physics laboratory.
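The period measurement described above can be sketched in a few lines. The helper below is hypothetical (not from the paper): it estimates a pendulum's period from evenly sampled horizontal positions, as a Kinect track would provide, by timing upward zero crossings of the mean-removed signal.

```python
import math

def pendulum_period(samples, dt):
    """Estimate the oscillation period from evenly sampled positions.

    Detects upward zero crossings of the mean-removed signal and
    averages the time between successive crossings."""
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    crossings = [i * dt for i in range(1, len(centered))
                 if centered[i - 1] < 0 <= centered[i]]
    if len(crossings) < 2:
        raise ValueError("need at least two upward zero crossings")
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    return sum(gaps) / len(gaps)

# Synthetic track: a 1 m simple pendulum has T = 2*pi*sqrt(L/g) ~ 2.006 s.
g, L, dt = 9.81, 1.0, 0.01
T_true = 2 * math.pi * math.sqrt(L / g)
xs = [0.1 * math.sin(2 * math.pi * (t * dt) / T_true) for t in range(600)]
T_est = pendulum_period(xs, dt)
```

Timing accuracy is limited by the frame interval `dt`, which mirrors the real constraint of the Kinect's fixed frame rate.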

  3. Use of a Proximity Sensor Switch for "Hands Free" Operation of Computer-Based Video Prompting by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Ivey, Alexandria N.; Mechling, Linda C.; Spencer, Galen P.

    2015-01-01

    In this study, the effectiveness of a "hands free" approach for operating video prompts to complete multi-step tasks was measured. Students advanced the video prompts by using a motion (hand wave) over a proximity sensor switch. Three young adult females with a diagnosis of moderate intellectual disability participated in the study.…

  4. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision; an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure-from-motion algorithm; and a Weighted Least Squares adjustment of all a priori metadata across the frames is then performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g., surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
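The tie-point measurement step can be illustrated with a crude sum-of-absolute-differences block matcher. This is a simplified stand-in for the optical-flow techniques the abstract refers to, and `match_patch` is a hypothetical helper, not the test bed's code.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size 2D patches."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def match_patch(ref, tgt, top, left, size, radius):
    """Find where a size x size patch from `ref` (at row `top`, col `left`)
    reappears in `tgt`, searching +/- radius pixels and minimizing SAD.
    Assumes the search window stays inside the frame."""
    patch = [row[left:left + size] for row in ref[top:top + size]]
    best = None
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = top + dr, left + dc
            cand = [row[c:c + size] for row in tgt[r:r + size]]
            score = sad(patch, cand)
            if best is None or score < best[0]:
                best = (score, dr, dc)
    return best[1], best[2]   # row/col displacement of the tie point

# Two 12x12 synthetic frames: a bright 3x3 blob shifted down 1, right 2.
W = 12
ref = [[0] * W for _ in range(W)]
tgt = [[0] * W for _ in range(W)]
for r in range(4, 7):
    for c in range(4, 7):
        ref[r][c] = 9
        tgt[r + 1][c + 2] = 9
shift = match_patch(ref, tgt, top=4, left=4, size=3, radius=3)
```

A production pipeline would instead use pyramidal optical flow with sub-pixel refinement, but the exhaustive search makes the matching objective explicit.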

  5. Activity-based exploitation of Full Motion Video (FMV)

    NASA Astrophysics Data System (ADS)

    Kant, Shashi

    2012-06-01

    Video has been a game-changer in how US forces are able to find, track, and defeat their adversaries. With millions of minutes of video being generated from an increasing number of sensor platforms, the DOD has stated that the rapid increase in video is overwhelming its analysts. The manpower required to view and garner useable information from the flood of video is unaffordable, especially in light of current fiscal restraints. "Search" within full-motion video has traditionally relied on human tagging of content, and on video metadata, to provide filtering and locate segments of interest in the context of an analyst's query. Our approach uses a novel machine-vision-based method to index FMV, combining object recognition and tracking with the detection of events and activities. This approach enables FMV exploitation in real time, as well as a forensic look-back within archives. It can help get the most information out of video sensor collection, focus the attention of overburdened analysts, form connections in activity over time, and conserve national fiscal resources in exploiting FMV.

  6. Efficient Feature Extraction and Likelihood Fusion for Vehicle Tracking in Low Frame Rate Airborne Video

    DTIC Science & Technology

    2010-07-01

    imagery, persistent sensor array I. Introduction New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a...geometric occlusions between target and sensor, motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small...distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with computational photography techniques enable the

  7. Human activity discrimination for maritime application

    NASA Astrophysics Data System (ADS)

    Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith

    2008-04-01

    The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects the target acquisition model (NVThermIP) sensor performance estimates. This paper looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to aid in discerning the type of human activity more quickly and accurately. This, in turn, will affect estimated sensor performance, and these effects are measured in order to calibrate current target acquisition models for this task. The study employed an eleven-alternative forced choice (11AFC) human perception experiment to measure the task difficulty of discriminating unique human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this experiment is given, including: the data collection, image processing, perception testing, and how contrast was defined for video. These results are applicable to evaluate sensor field performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.

  8. Increased ISR operator capability utilizing a centralized 360° full motion video display

    NASA Astrophysics Data System (ADS)

    Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.

    2012-06-01

    In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad of electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information and an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data describing the environment. In this paper we present a novel approach to contextualizing multi-sensor data onto a real-time, full motion video 360 degree imaging display. The system described could function as a primary display system for command and control in security, military, and observation posts. It has the ability to process and enable interactive control of multiple other sensor systems. It enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. Also, it can be used to interface to other systems including: auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).

  9. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can "move" through the virtual environment and explore the relationships between the sensor data, objects, and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
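The fusion of motion detection data from multiple video sensors into a 3D volume can be caricatured as a set intersection over voxel cells: a cell is occupied only if every sensor that views it reports motion there. This is a minimal sketch of the fusion idea, not the VVMD implementation.

```python
def fuse_vvmd(cell_sets):
    """Fuse per-sensor motion-detection cell sets into the set of voxel
    cells flagged from every viewpoint (set intersection)."""
    fused = set(cell_sets[0])
    for s in cell_sets[1:]:
        fused &= set(s)
    return fused

# Hypothetical (x, y, z) voxel cells flagged by three cameras.
cam_a = {(1, 1, 0), (1, 2, 0), (2, 2, 1)}
cam_b = {(1, 2, 0), (2, 2, 1), (3, 0, 0)}
cam_c = {(2, 2, 1), (1, 2, 0)}
volume = fuse_vvmd([cam_a, cam_b, cam_c])
```

Intersection suppresses single-camera false alarms, which is one reason multi-sensor volumetric fusion is attractive for assessment.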

  10. Method and System for Physiologically Modulating Videogames and Simulations which Use Gesture and Body Image Sensing Control Input Devices

    NASA Technical Reports Server (NTRS)

    Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)

    2017-01-01

    Method for physiologically modulating videogames and simulations includes utilizing input from a motion-sensing video game system and input from a physiological signal acquisition device. The inputs from the physiological signal sensors are utilized to change the response of a user's avatar to inputs from the motion-sensing system. The motion-sensing system comprises a 3D sensor system having full-body 3D motion capture of a user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback as may be desired for treatment of various conditions.

  11. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) based EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then uses those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az value (area under the ROC curve) of 1, indicating perfect classification, over a range of display frequencies and video speeds.
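The static/moving chip-labeling gate can be approximated with a mean absolute frame difference per chip. This is a deliberately simplified stand-in for the paper's motion-surprise computation, assuming each chip is a flat list of pixel intensities.

```python
def label_chips(prev, curr, thresh):
    """Label each image chip 'moving' or 'static' by mean absolute
    frame difference between consecutive frames."""
    labels = []
    for p, c in zip(prev, curr):   # one flat pixel list per chip
        diff = sum(abs(a - b) for a, b in zip(p, c)) / len(p)
        labels.append('moving' if diff > thresh else 'static')
    return labels

# Two chips, two consecutive frames: only the second chip changes.
prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [60, 60, 10]]
labels = label_chips(prev, curr, thresh=5)
```

The label then routes each chip to a static or video RSVP presentation, as the abstract describes.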

  12. Reliable motion detection of small targets in video with low signal-to-clutter ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, S.A.; Naylor, R.B.

    1995-07-01

    Studies show that vigilance decreases rapidly after several minutes when human operators are required to search live video for infrequent intrusion detections. Therefore, there is a need for systems which can automatically detect targets in live video and reserve the operator's attention for assessment only. Thus far, automated systems have not simultaneously provided adequate detection sensitivity, false alarm suppression, and ease of setup when used in external, unconstrained environments. This unsatisfactory performance can be exacerbated by poor video imagery with low contrast, high noise, dynamic clutter, image misregistration, and/or the presence of small, slow, or erratically moving targets. This paper describes a highly adaptive video motion detection and tracking algorithm which has been developed as part of Sandia's Advanced Exterior Sensor (AES) program. The AES is a wide-area detection and assessment system for use in unconstrained exterior security applications. The AES detection and tracking algorithm provides good performance under stressing data and environmental conditions. Features of the algorithm include: reliable detection, with a negligible false alarm rate, of variable velocity targets having low signal-to-clutter ratios; reliable tracking of targets that exhibit motion that is non-inertial, i.e., varies in direction and velocity; automatic adaptation to both infrared and visible imagery with variable quality; and suppression of false alarms caused by sensor flaws and/or cutouts.
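An adaptive video motion detector in the spirit described, though not the AES algorithm itself, can be sketched as an exponential running-average background model with a deviation-scaled alarm threshold. Frames are flattened to 1D pixel lists for brevity.

```python
def detect_motion(frames, alpha=0.2, k=4.0):
    """Running-average background model with an adaptive threshold:
    a pixel alarms when it deviates from the background by more than
    k times its running mean absolute deviation. Background and
    deviation are updated only on non-alarm pixels."""
    bg = [float(p) for p in frames[0]]
    dev = [1.0] * len(bg)
    alarms = []
    for t, frame in enumerate(frames[1:], start=1):
        hits = []
        for i, p in enumerate(frame):
            d = abs(p - bg[i])
            if d > k * dev[i]:
                hits.append(i)
            else:
                bg[i] = (1 - alpha) * bg[i] + alpha * p
                dev[i] = (1 - alpha) * dev[i] + alpha * d
        if hits:
            alarms.append((t, hits))
    return alarms

# A 5-pixel "video": steady background, one bright intruder in frame 2.
frames = [[10] * 5, [10] * 5, [10, 10, 60, 10, 10], [10] * 5]
alarms = detect_motion(frames)
```

Gating the model update on non-alarm pixels keeps a slow-moving target from being absorbed into the background, one of the failure modes the abstract highlights.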

  13. An intelligent surveillance platform for large metropolitan areas with dense sensor deployment.

    PubMed

    Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A; Smilansky, Zeev

    2013-06-07

    This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on the usage of inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is extremely suitable for its deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.

  14. Validation of cardiac accelerometer sensor measurements.

    PubMed

    Remme, Espen W; Hoff, Lars; Halvorsen, Per Steinar; Naerum, Edvard; Skulstad, Helge; Fleischer, Lars A; Elle, Ole Jakob; Fosse, Erik

    2009-12-01

    In this study we have investigated the accuracy of an accelerometer sensor designed for the measurement of cardiac motion and automatic detection of motion abnormalities caused by myocardial ischaemia. The accelerometer, attached to the left ventricular wall, changed its orientation relative to the direction of gravity during the cardiac cycle. This caused a varying gravity component in the measured acceleration signal that introduced an error in the calculation of myocardial motion. Circumferential displacement, velocity and rotation of the left ventricular apical region were calculated from the measured acceleration signal. We developed a mathematical method to separate translational and gravitational acceleration components based on a priori assumptions of myocardial motion. The accuracy of the measured motion was investigated by comparison with known motion of a robot arm programmed to move like the heart wall. The accuracy was also investigated in an animal study. The sensor measurements were compared with simultaneously recorded motion from a robot arm attached next to the sensor on the heart and with measured motion by echocardiography and a video camera. The developed compensation method for the varying gravity component improved the accuracy of the calculated velocity and displacement traces, giving very good agreement with the reference methods.
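The varying-gravity correction can be illustrated for a single accelerometer axis: if the sensor's tilt relative to gravity is known, the gravity projection g·sin(tilt) can be subtracted from the measured trace. The tilt is treated as a given input here; in the paper it is estimated from a priori assumptions about myocardial motion.

```python
import math

def remove_gravity(acc_meas, tilt_rad, g=9.81):
    """Subtract the tilt-dependent gravity projection from a one-axis
    accelerometer trace. As the sensor rotates with the heart wall,
    the gravity component along its axis is g * sin(tilt)."""
    return [a - g * math.sin(th) for a, th in zip(acc_meas, tilt_rad)]

# Synthetic check: contaminate a known acceleration, then recover it.
true_acc = [0.5, -0.2, 0.0]
tilt = [0.1, 0.3, -0.2]
meas = [a + 9.81 * math.sin(th) for a, th in zip(true_acc, tilt)]
recovered = remove_gravity(meas, tilt)
```

Because g·sin(tilt) can easily exceed the cardiac accelerations of interest, even small tilt changes corrupt integrated velocity and displacement, which is why the paper's compensation matters.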

  15. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.
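Once a respiratory signal has been extracted, converting it to a rate in breaths per minute is straightforward. The zero-crossing counter below is a toy stand-in for the paper's motion-factorization pipeline, applied to a synthetic chest-motion trace.

```python
import math

def respiration_rate_bpm(signal, fs):
    """Breaths per minute from a chest/abdomen motion trace, counting
    upward zero crossings of the mean-removed signal."""
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]
    crossings = sum(1 for i in range(1, len(x)) if x[i - 1] < 0 <= x[i])
    minutes = len(signal) / fs / 60.0
    return crossings / minutes

fs = 10.0                 # frames per second
breaths_hz = 0.25         # 15 breaths per minute
trace = [math.sin(2 * math.pi * breaths_hz * (t / fs) + 0.3)
         for t in range(600)]   # 60 s of synthetic chest motion
rate = respiration_rate_bpm(trace, fs)
```

Real traces need band-pass filtering before crossing counting; the clean sinusoid here sidesteps that.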

  16. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
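The batch Weighted Least Squares machinery can be shown on a scalar toy problem, solving the normal equations (AᵀWA)x = AᵀWy in closed form for a weighted line fit. This sketches only the estimator form, not the registration adjustment itself, which operates on sensor trajectory and attitude corrections.

```python
def wls_line_fit(ts, ys, ws):
    """Weighted least-squares fit of y = a + b*t, solving the 2x2
    normal equations (A^T W A) x = A^T W y in closed form."""
    s0 = sum(ws)
    s1 = sum(w * t for w, t in zip(ws, ts))
    s2 = sum(w * t * t for w, t in zip(ws, ts))
    r0 = sum(w * y for w, y in zip(ws, ys))
    r1 = sum(w * t * y for w, t, y in zip(ws, ts, ys))
    det = s0 * s2 - s1 * s1
    a = (s2 * r0 - s1 * r1) / det
    b = (s0 * r1 - s1 * r0) / det
    return a, b

# Exact line y = 2 + 3t; unequal weights do not bias an exact fit.
ts, ys, ws = [0.0, 1.0, 2.0], [2.0, 5.0, 8.0], [1.0, 2.0, 1.0]
a, b = wls_line_fit(ts, ys, ws)
```

In the registration context, the weight matrix encodes the expected magnitude and correlation of the a priori metadata errors, which is what makes the a posteriori accuracy prediction possible.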

  17. An Intelligent Surveillance Platform for Large Metropolitan Areas with Dense Sensor Deployment

    PubMed Central

    Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A.; Smilansky, Zeev

    2013-01-01

    This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on the usage of inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is extremely suitable for its deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage. PMID:23748169

  18. Achieving an Optimal Medium Altitude UAV Force Balance in Support of COIN Operations

    DTIC Science & Technology

    2009-02-02

    and execute operations. UAS with common data links and remote video terminals (RVTs) provide input to the common operational picture (COP) and...full-motion video (FMV) is intuitive to many tactical warfighters who have used similar sensors in manned aircraft. Modern data links allow the video ...Document (AFDD) 2-9. Intelligence, Surveillance, and Reconnaissance Operations, 17 July 2007. Baldor, Lolita C. “Increased UAV reliance evident in

  19. In Vivo Evaluation of Wearable Head Impact Sensors.

    PubMed

    Wu, Lyndia C; Nangia, Vaibhav; Bui, Kevin; Hammoor, Bradley; Kurt, Mehmet; Hernandez, Fidel; Kuo, Calvin; Camarillo, David B

    2016-04-01

    Inertial sensors are commonly used to measure human head motion. Some sensors have been tested with dummy or cadaver experiments with mixed results, and methods to evaluate sensors in vivo are lacking. Here we present an in vivo method using high speed video to test teeth-mounted (mouthguard), soft tissue-mounted (skin patch), and headgear-mounted (skull cap) sensors during 6-13 g sagittal soccer head impacts. Sensor coupling to the skull was quantified by displacement from an ear-canal reference. Mouthguard displacements were within video measurement error (<1 mm), while the skin patch and skull cap displaced up to 4 and 13 mm from the ear-canal reference, respectively. We used the mouthguard, which had the least displacement from skull, as the reference to assess 6-degree-of-freedom skin patch and skull cap measurements. Linear and rotational acceleration magnitudes were over-predicted by both the skin patch (with 120% NRMS error for a(mag), 290% for α(mag)) and the skull cap (320% NRMS error for a(mag), 500% for α(mag)). Such over-predictions were largely due to out-of-plane motion. To model sensor error, we found that in-plane skin patch linear acceleration in the anterior-posterior direction could be modeled by an underdamped viscoelastic system. In summary, the mouthguard showed tighter skull coupling than the other sensor mounting approaches. Furthermore, the in vivo methods presented are valuable for investigating skull acceleration sensor technologies.
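The NRMS error figures quoted above can be reproduced under one common convention: RMS error normalized by the reference signal's peak-to-peak range. The exact normalization used in the paper is an assumption here.

```python
def nrms_error(measured, reference):
    """Normalized RMS error as a percentage of the reference
    peak-to-peak range (one common convention; the paper may
    normalize differently)."""
    n = len(reference)
    rms = (sum((m - r) ** 2 for m, r in zip(measured, reference)) / n) ** 0.5
    span = max(reference) - min(reference)
    return 100.0 * rms / span

# Toy two-sample traces: a 50% over-prediction on the peak sample.
err = nrms_error([0.0, 15.0], [0.0, 10.0])
```

Values above 100%, as reported for the skull cap's rotational acceleration, mean the error itself exceeds the full range of the reference measurement.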

  20. Real-time synchronization of kinematic and video data for the comprehensive assessment of surgical skills.

    PubMed

    Dosis, Aristotelis; Bello, Fernando; Moorthy, Krishna; Munz, Yaron; Gillies, Duncan; Darzi, Ara

    2004-01-01

    Surgical dexterity in operating theatres has traditionally been assessed subjectively. Electromagnetic (EM) motion tracking systems such as the Imperial College Surgical Assessment Device (ICSAD) have been shown to produce valid and accurate objective measures of surgical skill. To allow for video integration we have modified the data acquisition and built it within the ROVIMAS analysis software. We then used ActiveX 9.0 DirectShow video capturing and the system clock as a time stamp for the synchronized concurrent acquisition of kinematic data and video frames. Interactive video/motion data browsing was implemented to allow the user to concentrate on frames exhibiting certain kinematic properties that could result in operative errors. We exploited video-data synchronization to calculate the camera visual hull by identifying all 3D vertices using the ICSAD electromagnetic sensors. We also concentrated on high velocity peaks as a means of identifying potential erroneous movements to be confirmed by studying the corresponding video frames. The outcome of the study clearly shows that the kinematic data are precisely synchronized with the video frames and that the velocity peaks correspond to large and sudden excursions of the instrument tip. We validated the camera visual hull by both video and geometrical kinematic analysis and we observed that graphs containing fewer sudden velocity peaks are less likely to have erroneous movements. This work presented further developments to the well-established ICSAD dexterity analysis system. Synchronized real-time motion and video acquisition provides a comprehensive assessment solution by combining quantitative motion analysis tools and qualitative targeted video scoring.
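Flagging high-velocity peaks from tracked tip positions reduces to differencing consecutive 3D samples. This is a minimal sketch with a hypothetical `velocity_peaks` helper, not ROVIMAS code; flagged frame indices would then be used to pull up the synchronized video frames.

```python
def velocity_peaks(positions, dt, thresh):
    """Speed of a tracked instrument tip from consecutive 3D samples,
    flagging frame indices whose speed exceeds `thresh` so the
    matching video frames can be reviewed."""
    speeds, peaks = [], []
    for i in range(1, len(positions)):
        (x0, y0, z0), (x1, y1, z1) = positions[i - 1], positions[i]
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5 / dt
        speeds.append(v)
        if v > thresh:
            peaks.append(i)
    return speeds, peaks

# Four samples at 10 Hz: a smooth move, a pause, then a sudden excursion.
positions = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
             (0.0, 0.0, 1.1), (0.0, 0.0, 5.0)]
speeds, peaks = velocity_peaks(positions, dt=0.1, thresh=20.0)
```

The shared system-clock timestamp described in the abstract is what makes the frame index a valid key into both the kinematic and the video streams.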

  1. 33 CFR 117.829 - Northeast Cape Fear River.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...

  2. 33 CFR 117.829 - Northeast Cape Fear River.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... maintenance authorized in accordance with Subpart A of this part. (3) Trains shall be controlled so that any... of failure or obstruction of the motion sensors, laser scanners, video cameras or marine-radio...

  3. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, they typically require a speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible.
    This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for the structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix using modal superposition, which is then interpreted and manipulated by a family of unsupervised learning models and techniques. Thus, the proposed method is able to blindly extract modal frequencies, damping ratios, and full-field (as many points as the pixel number of the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output (video measurements)-only identification and visualization of the weakly-excited mode is demonstrated, and several issues with its implementation are discussed.
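The modal-frequency extraction step can be illustrated on a single pixel's motion time series with a naive DFT peak pick. This is a sketch only; the paper's method operates on the full-field phase matrix with unsupervised learning, not on one pixel at a time.

```python
import math

def dominant_frequency(signal, fs):
    """Dominant frequency of a mean-removed signal via a naive DFT
    (O(n^2), fine for a sketch): a stand-in for identifying a modal
    frequency from one pixel's phase time series."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n   # bin index -> Hz

# 2 s of a 5 Hz "vibration" sampled at 100 Hz.
fs, n = 100.0, 200
sig = [math.sin(2 * math.pi * 5.0 * (t / fs)) for t in range(n)]
f = dominant_frequency(sig, fs)
```

Frequency resolution is fs/n, so longer video records sharpen the modal-frequency estimate, the same trade-off a camera-based system faces.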

  4. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous work in motion analysis has used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video with motion capture and range data provides a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Work using motion capture data has been limited by small datasets collected in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to classify gender on the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improve from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
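The linear-vs-nonlinear comparison above can be sketched on synthetic data. The paper uses an RBF-kernel SVM; as a compact, dependency-free stand-in, the sketch below uses an RBF-kernel nearest-class-mean rule against a linear nearest-class-mean rule, both scored with leave-one-out cross-validation on made-up, nonlinearly separable "gait" features (all names and numbers are illustrative, not the paper's data).

```python
import numpy as np

def loo_accuracy(X, y, predict):
    """Leave-one-out cross-validation accuracy for a predict(Xtr, ytr, x) rule."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        hits += predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(y)

def linear_predict(Xtr, ytr, x):
    # Linear rule: nearest class mean (a crude proxy for LDA)
    means = [Xtr[ytr == c].mean(axis=0) for c in (0, 1)]
    return int(np.argmin([np.linalg.norm(x - m) for m in means]))

def rbf_predict(Xtr, ytr, x, gamma=5.0):
    # Nonlinear rule: RBF-kernel similarity to each class (SVM stand-in)
    k = np.exp(-gamma * np.sum((Xtr - x) ** 2, axis=1))
    return int(np.argmax([k[ytr == c].mean() for c in (0, 1)]))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(98, 2))        # 98 trials, as in the paper
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # XOR-style class structure

acc_linear = loo_accuracy(X, y, linear_predict)
acc_rbf = loo_accuracy(X, y, rbf_predict)
```

On this XOR-style class structure no linear boundary helps, so the linear rule hovers near chance while the kernelized rule classifies most points correctly, mirroring the paper's qualitative finding.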

  5. Wiimote Experiments: Circular Motion

    ERIC Educational Resources Information Center

    Kouh, Minjoon; Holz, Danielle; Kawam, Alae; Lamont, Mary

    2013-01-01

    The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a…

  6. Wiimote Experiments: Circular Motion

    NASA Astrophysics Data System (ADS)

    Kouh, Minjoon; Holz, Danielle; Kawam, Alae; Lamont, Mary

    2013-03-01

    The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a bicycle wheel.

  7. Quantifying technical skills during open operations using video-based motion analysis.

    PubMed

    Glarner, Carly E; Hu, Yue-Yung; Chen, Chia-Hsiung; Radwin, Robert G; Zhao, Qianqian; Craven, Mark W; Wiegmann, Douglas A; Pugh, Carla M; Carty, Matthew J; Greenberg, Caprice C

    2014-09-01

Objective quantification of technical operative skills in surgery remains poorly defined, although the delivery of and training in these skills is essential to the profession of surgery. Attempts to measure hand kinematics to quantify operative performance primarily have relied on electromagnetic sensors attached to the surgeon's hand or instrument. We sought to determine whether a similar motion analysis could be performed with a marker-less, video-based review, allowing for a scalable approach to performance evaluation. We recorded six reduction mammoplasty operations, a plastic surgery procedure in which the attending and resident surgeons operate in parallel. Segments representative of surgical tasks were identified with Multimedia Video Task Analysis software. Video digital processing was used to extract and analyze the spatiotemporal characteristics of hand movement. Attending plastic surgeons appear to use their nondominant hand more than residents when cutting with the scalpel, suggesting more use of countertraction. While suturing, attendings were more ambidextrous, with smaller differences in movement between their dominant and nondominant hands than residents. Attendings also seem to have more conservation of movement when performing instrument tying than residents, as demonstrated by less nondominant hand displacement. These observations were consistent within procedures and between the different attending plastic surgeons evaluated in this fashion. Video motion analysis can be used to provide objective measurement of technical skills without the need for sensors or markers. Such data could be valuable in better understanding the acquisition and degradation of operative skills, providing enhanced feedback to shorten the learning curve. Copyright © 2014 Mosby, Inc. All rights reserved.

  8. Beat-to-beat heart rate estimation fusing multimodal video and sensor data

    PubMed Central

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-01-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754
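A minimal sketch of the idea behind fusing redundant channels: combine noisy beat-interval estimates by inverse-variance weighting, so the fused estimate beats every single channel. The per-channel noise figures are invented for illustration; the paper's actual Bayesian model is more elaborate than this.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of scalar estimates."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

true_ibi = 0.80                      # true beat-to-beat interval, seconds
sigmas = [0.05, 0.08, 0.03]          # hypothetical noise: color, motion, BCG
rng = np.random.default_rng(1)

errs = []
for _ in range(2000):
    est = [true_ibi + rng.normal(0, s) for s in sigmas]
    errs.append(abs(fuse(est, [s ** 2 for s in sigmas]) - true_ibi))
mean_abs_err = float(np.mean(errs))
```

The fused standard deviation is 1/sqrt(sum(1/sigma_i^2)), about 0.025 s here, below the best single channel's 0.03 s, which is the mechanism by which multimodal fusion improves beat-to-beat accuracy.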

  9. Beat-to-beat heart rate estimation fusing multimodal video and sensor data.

    PubMed

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference.

  10. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software, which makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  11. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    NASA Astrophysics Data System (ADS)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each node of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video transmitted by the nodes is optimized. The scheme accounts for the fact that sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes require a lower source coding rate, so they can allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding means such nodes can transmit at lower power, which both increases battery life and reduces interference to other nodes. Two optimization criteria are considered: one that minimizes the average video distortion over the nodes, and one that minimizes the maximum distortion among the nodes. The transmission powers may take continuous values, whereas the source and channel coding rates assume only discrete values. The resulting problem is thus a mixed-integer optimization task, and it is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate, and channel coding rate for the nodes of the visual sensor network.
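The mixed-integer PSO step can be sketched on a toy problem: one continuous variable (transmit power) and one discrete variable (a channel-coding-rate index) handled by rounding the particle position before evaluation. The "distortion" objective and rate values below are illustrative, not the paper's model.

```python
import numpy as np

RATES = [0.25, 0.5, 0.75]                 # hypothetical channel-coding rates

def distortion(power, rate_idx):
    # Toy convex surrogate: best at power = 1.3 and rate closest to 0.5
    return (power - 1.3) ** 2 + 4.0 * (RATES[rate_idx] - 0.5) ** 2

rng = np.random.default_rng(2)
n, iters = 20, 60
pos = rng.uniform([0, 0], [2, 2], size=(n, 2))   # columns: [power, rate index]
vel = np.zeros((n, 2))
pbest, pcost = pos.copy(), np.full(n, np.inf)
gbest, gcost = None, np.inf

for _ in range(iters):
    for i in range(n):
        idx = int(np.clip(round(pos[i, 1]), 0, 2))  # discretize the rate index
        c = distortion(pos[i, 0], idx)
        if c < pcost[i]:
            pcost[i], pbest[i] = c, pos[i].copy()
        if c < gcost:
            gcost, gbest = c, pos[i].copy()
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [0, 0], [2, 2])

best_power = float(gbest[0])
best_rate = int(np.clip(round(gbest[1]), 0, 2))
```

Rounding-inside-the-objective is one common way to let a continuous PSO swarm explore a discrete axis; the swarm converges to power ≈ 1.3 with the middle coding rate selected.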

  12. Robust real-time horizon detection in full-motion video

    NASA Astrophysics Data System (ADS)

    Young, Grace B.; Bagnall, Bryan; Lane, Corey; Parameswaran, Shibin

    2014-06-01

The ability to detect the horizon in full-motion video on a real-time basis is an important capability that aids real-time processing of full-motion video for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a real-time horizon detection method designed as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion video captured by ship- or harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs), or any other surveillance method for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle, irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees, vehicles at a distance), by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results obtained with our algorithm on real-world full-motion video data from a variety of surveillance sensors, such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
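The coarse-to-fine idea can be sketched in a few lines. This simplified version (the paper's actual features and hierarchy are not specified here) scores candidate rows by the blue-channel contrast between the regions above and below, coarsely in stage 1 and then refined around the best coarse row in stage 2; it assumes a roughly horizontal horizon purely for brevity.

```python
import numpy as np

def find_horizon_row(img):
    """Two-stage, color-contrast horizon row search on an RGB image."""
    blue = img[:, :, 2].astype(float)
    rows = img.shape[0]

    def score(r):
        # Contrast between mean blue above and below candidate row r
        return abs(blue[:r].mean() - blue[r:].mean())

    coarse = max(range(8, rows - 8, 8), key=score)            # stage 1: every 8th row
    return max(range(max(1, coarse - 8), min(rows - 1, coarse + 8)),
               key=score)                                      # stage 2: refine

# Synthetic 120x160 frame: bright-blue sky over darker sea, horizon at row 70
img = np.zeros((120, 160, 3), dtype=np.uint8)
img[:70] = (120, 170, 230)    # sky
img[70:] = (30, 60, 90)       # sea
row = find_horizon_row(img)
```

Stage 1 touches only every eighth row, so the refinement stage evaluates a small neighborhood, which is the efficiency argument behind hierarchical search.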

  13. A System for Video Surveillance and Monitoring CMU VSAM Final Report

    DTIC Science & Technology

    1999-11-30

motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses...rithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single

  14. Statistical data mining of streaming motion data for fall detection in assistive environments.

    PubMed

    Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P

    2011-01-01

The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using motion, audio, or video sensors either on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors are data streams that require real-time recognition, especially in emergency situations, so traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data captured from the user's environment using overhead cameras, along with motion data collected from accelerometers on the subject's body, are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system, along with an initial evaluation of the achieved accuracy in detecting falls.
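A bare-bones illustration of real-time screening on an accelerometer stream: flag a free-fall dip followed shortly by an impact spike in the acceleration magnitude. The thresholds are illustrative, and this simple rule stands in for (and is much cruder than) the statistical stream-mining method the paper describes.

```python
import numpy as np

def detect_fall(mag, fs, low=0.4, high=2.5, window_s=1.0):
    """Flag a free-fall dip (< low g) followed by an impact spike (> high g)
    within window_s seconds, on an acceleration-magnitude stream in g."""
    w = int(window_s * fs)
    for i, m in enumerate(mag):
        if m < low and np.max(mag[i:i + w]) > high:
            return True
    return False

fs = 50                                  # 50 Hz accelerometer stream
t = np.arange(0, 4, 1.0 / fs)
mag = np.ones_like(t)                    # quiet standing: ~1 g
mag[100:110] = 0.2                       # simulated free-fall phase (~0.2 s)
mag[112:116] = 3.0                       # simulated impact spike
```

`detect_fall(mag, fs)` fires on the simulated fall and stays quiet on a constant 1 g trace; a streaming deployment would apply the same check to a sliding buffer as samples arrive.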

  15. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important for small multi-rotor UAVs, whose endurance is limited by short battery life. Images can be stored on board with either still image or video compression. Still image systems are preferred when low frame rates are involved, because video coding systems rely on motion estimation and compensation algorithms, which fail when the motion vectors are significantly long and the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, thereby maximizing the encoder performance. Experiments are performed on both simulated and real world video sequences.
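A back-of-the-envelope sketch of metadata-based global motion: under a small-angle pinhole model, the INS yaw/pitch rates predict the inter-frame pixel shift without any motion search. The focal length and rates below are invented for illustration; the paper's geometry and refinement step are more complete.

```python
import math

def predicted_shift(yaw_rate, pitch_rate, focal_px, dt):
    """Predict inter-frame pixel shift from angular rates (rad/s),
    focal length in pixels, and frame interval dt (s)."""
    dx = focal_px * math.tan(yaw_rate * dt)    # horizontal shift, pixels
    dy = focal_px * math.tan(pitch_rate * dt)  # vertical shift, pixels
    return dx, dy

# 2 deg/s yaw at 30 fps with a 1000-pixel focal length
dx, dy = predicted_shift(math.radians(2.0), 0.0, 1000.0, 1.0 / 30.0)
```

A shift of roughly one pixel per frame from a modest 2 deg/s turn shows why metadata alone gives a usable motion-field seed, which the encoder can then refine with lightweight image analysis.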

  16. Standards for efficient employment of wide-area motion imagery (WAMI) sensors

    NASA Astrophysics Data System (ADS)

    Randall, L. Scott; Maenner, Paul F.

    2013-05-01

    Airborne Wide Area Motion Imagery (WAMI) sensors provide the opportunity for continuous high-resolution surveillance of geographic areas covering tens of square kilometers. This is both a blessing and a curse. Data volumes from "gigapixel-class" WAMI sensors are orders of magnitude greater than for traditional "megapixel-class" video sensors. The amount of data greatly exceeds the capacities of downlinks to ground stations, and even if this were not true, the geographic coverage is too large for effective human monitoring. Although collected motion imagery is recorded on the platform, typically only small "windows" of the full field of view are transmitted to the ground; the full set of collected data can be retrieved from the recording device only after the mission has concluded. Thus, the WAMI environment presents several difficulties: (1) data is too massive for downlink; (2) human operator selection and control of the video windows may not be effective; (3) post-mission storage and dissemination may be limited by inefficient file formats; and (4) unique system implementation characteristics may thwart exploitation by available analysis tools. To address these issues, the National Geospatial-Intelligence Agency's Motion Imagery Standards Board (MISB) is developing relevant standard data exchange formats: (1) moving target indicator (MTI) and tracking metadata to support tipping and cueing of WAMI windows using "watch boxes" and "trip wires"; (2) control channel commands for positioning the windows within the full WAMI field of view; and (3) a full-field-of-view spatiotemporal tiled file format for efficient storage, retrieval, and dissemination. The authors previously provided an overview of this suite of standards. This paper describes the latest progress, with specific concentration on a detailed description of the spatiotemporal tiled file format.
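The retrieval benefit of a spatiotemporal tiled format can be illustrated with a toy tile-addressing function (tile sizes, GOP length, and grid dimensions below are arbitrary, not the MISB format's actual layout): a client maps a pixel/frame query to the single tile it must fetch instead of pulling whole gigapixel frames.

```python
def tile_id(x, y, frame, tile_w=512, tile_h=512, gop=30, tiles_x=20, tiles_y=20):
    """Map a (pixel x, pixel y, frame) query to a flat spatiotemporal tile index:
    tiles are tile_w x tile_h pixels and gop frames deep, laid out row-major."""
    tx = x // tile_w          # tile column
    ty = y // tile_h          # tile row
    tt = frame // gop         # temporal tile slab
    return tt * (tiles_x * tiles_y) + ty * tiles_x + tx

# Query a small window at pixel (5300, 2100) in frame 95
tid = tile_id(5300, 2100, 95)
```

With this addressing, a "watch box" over a few city blocks touches a handful of tiles per GOP, which is the storage/dissemination efficiency the tiled format is after.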

  17. Sensor Management for Tactical Surveillance Operations

    DTIC Science & Technology

    2007-11-01

active and passive sonar for submarine and torpedo detection, and mine avoidance. [range, bearing] range 1.8 km to 55 km Active or Passive AN/SLQ-501...finding (DF) unit [bearing, classification] maximum range 1100 km Passive Cameras (daylight/night-vision) (video & still) Record optical and...infrared still images or motion video of events for near-real time assessment or long term analysis and archiving. Range is limited by the image resolution

  18. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    PubMed

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relation to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping, and these assessments were compared to the Kinect Action Recognition Tool (KART) to establish a level of agreement and determine the number of movements completed during five minutes of active video game play for 43 children (mean age 12 years 7 months ± 1 year 6 months). Inter-rater reliability between the two human raters was higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between the human raters and the KART system for the jump (r = 0.84, p < .01), and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both the humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to the humans, the KART system required a fraction of the time to analyse and tabulate the results.
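The r values above are Pearson correlations between raters' counts. As a quick illustration with made-up per-child jump counts (not the study's data), the statistic reduces to a few lines:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two count vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

rater_a = [12, 15, 9, 20, 14, 11, 18, 16]   # jump counts, human rater 1
rater_b = [11, 15, 10, 19, 15, 11, 17, 15]  # jump counts, human rater 2
r = pearson_r(rater_a, rater_b)
```

Two raters who disagree by at most one count per child land well above r = 0.9, the regime the study reports for jumps.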

  19. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    PubMed Central

    Rosenberg, Michael; Lay, Brendan S.; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relation to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping, and these assessments were compared to the Kinect Action Recognition Tool (KART) to establish a level of agreement and determine the number of movements completed during five minutes of active video game play for 43 children (mean age 12 years 7 months ± 1 year 6 months). Inter-rater reliability between the two human raters was higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between the human raters and the KART system for the jump (r = 0.84, p < .01), and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both the humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to the humans, the KART system required a fraction of the time to analyse and tabulate the results. PMID:27442437

  20. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    PubMed

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.

  1. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-06-21

Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar, giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. Video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task, and will be further developed by the research team.

  2. Comments on airborne ISR radar utilization

    NASA Astrophysics Data System (ADS)

    Doerry, A. W.

    2016-05-01

A sensor/payload operator for modern multi-sensor multi-mode Intelligence, Surveillance, and Reconnaissance (ISR) platforms is often confronted with a plethora of options in sensors and sensor modes. This often leads an over-worked operator to down-select to favorite sensors and modes; for example, a justifiably favorite Full Motion Video (FMV) sensor at the expense of radar modes, even if radar modes can offer unique and advantageous information. At best, sensors might be used in a serial monogamous fashion with some cross-cueing. The challenge is then to increase the utilization of the radar modes in a manner attractive to the sensor/payload operator. We propose that this is best accomplished by combining sensor modes and displays into 'super-modes'.

  3. In vivo evaluation of wearable head impact sensors

    PubMed Central

    Wu, Lyndia C.; Nangia, Vaibhav; Bui, Kevin; Hammoor, Bradley; Kurt, Mehmet; Hernandez, Fidel; Kuo, Calvin; Camarillo, David B.

    2015-01-01

Inertial sensors are commonly used to measure human head motion. Some sensors have been tested with dummy or cadaver experiments with mixed results, and methods to evaluate sensors in vivo are lacking. Here we present an in vivo method using high speed video to test teeth-mounted (mouthguard), soft tissue-mounted (skin patch), and headgear-mounted (skull cap) sensors during 6–13g sagittal soccer head impacts. Sensor coupling to the skull was quantified by displacement from an ear-canal reference. Mouthguard displacements were within video measurement error (<1mm), while the skin patch and skull cap displaced up to 4mm and 13mm from the ear-canal reference, respectively. We used the mouthguard, which had the least displacement from the skull, as the reference to assess 6-degree-of-freedom skin patch and skull cap measurements. Linear and rotational acceleration magnitudes were over-predicted by both the skin patch (with 120% NRMS error for amag, 290% for αmag) and the skull cap (320% NRMS error for amag, 500% for αmag). Such over-predictions were largely due to out-of-plane motion. To model sensor error, we found that in-plane skin patch acceleration peaks in the anterior-posterior direction could be modeled by an underdamped viscoelastic system. In summary, the mouthguard showed tighter skull coupling than the other sensor mounting approaches. Furthermore, the in vivo methods presented are valuable for investigating skull acceleration sensor technologies. PMID:26289941
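The NRMS error figures quoted above can be sketched with synthetic traces. One common normalization divides the RMS error by the reference signal's range; the exact definition the authors use may differ, and the pulse shapes below are invented for illustration.

```python
import numpy as np

def nrms_error(test, ref):
    """RMS error between traces, as a percentage of the reference range
    (one common NRMS convention; others normalize by the reference RMS)."""
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    rms = np.sqrt(np.mean((test - ref) ** 2))
    return 100.0 * rms / (np.max(ref) - np.min(ref))

t = np.linspace(0, 0.05, 200)                   # a 50 ms impact window
ref = 10.0 * np.exp(-((t - 0.02) / 0.005) ** 2) # reference pulse, peak 10 g
patch = 2.2 * ref                               # an over-predicting sensor
err = nrms_error(patch, ref)
```

A sensor that over-predicts the peak by 2.2x already lands in the tens of percent NRMS error, putting the paper's 120–500% figures in perspective.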

  4. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With the increasing development of motion sensors, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and this output becomes the final result. The motion and audio models are learned using Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison shows that the multi-modal model composed of all three models scores the highest recognition rate; this improvement indicates that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding the human actions of daily life more precisely.
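One simple way to integrate the three models' outputs (the paper's actual integration framework is not specified here) is late fusion: average the per-class posteriors from the motion, audio, and video classifiers and take the argmax. The posteriors below are hypothetical.

```python
import numpy as np

def fuse_posteriors(p_motion, p_audio, p_video):
    """Late fusion: average per-class posteriors from three modality models
    and return (final class label, fused posterior vector)."""
    p = (np.asarray(p_motion) + np.asarray(p_audio) + np.asarray(p_video)) / 3.0
    return int(np.argmax(p)), p

# Hypothetical posteriors over 3 gesture classes from the three models
label, p = fuse_posteriors([0.5, 0.3, 0.2],   # motion model (HMM)
                           [0.2, 0.6, 0.2],   # audio model (HMM)
                           [0.1, 0.7, 0.2])   # video model (Random Forest)
```

Here the motion model alone would pick class 0, but the audio and video evidence tips the fused decision to class 1, which is the complementary effect the abstract credits for the accuracy gain.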

  5. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

In the last decade, improvements in VLSI and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices have been designed to capture real-time video in high-resolution formats at frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, that demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM), which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture continuously through a 40-Gbit Ethernet point-to-point link.
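The modulo-PCM idea behind the bandwidth saving can be sketched in software (parameters illustrative, not the paper's hardware design): the encoder keeps only the k low-order bits of each sample, and the decoder recovers each sample by choosing the value congruent modulo 2^k that is closest to its prediction, here simply the previous pixel.

```python
def mpcm_encode(samples, k):
    """Keep only the k low-order bits of each sample."""
    M = 1 << k
    return [s % M for s in samples]

def mpcm_decode(codes, k, first):
    """Recover samples assuming neighbors differ by less than 2**(k-1):
    pick the value congruent to the code mod 2**k nearest the prediction."""
    M = 1 << k
    out, pred = [first], first
    for c in codes[1:]:
        base = pred - (pred % M) + c                      # candidate ≡ c (mod M)
        best = min((base - M, base, base + M), key=lambda v: abs(v - pred))
        out.append(best)
        pred = best                                       # previous pixel predicts next
    return out

# One image row whose neighboring pixels differ by less than 2**(k-1) = 8
row = [100, 103, 101, 98, 104, 110, 107]
codes = mpcm_encode(row, k=4)
rec = mpcm_decode(codes, k=4, first=row[0])
```

Sending 4 bits per pixel instead of 8 halves the raw bandwidth in this toy case; the reconstruction is exact as long as the inter-pixel differences stay within half the modulo range, which is the smoothness assumption MPCM-style coders rely on.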

  6. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  8. Vibration-based damage detection in wind turbine blades using Phase-based Motion Estimation and motion magnification

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Mao, Zhu; Niezrecki, Christopher; Poozesh, Peyman

    2018-05-01

    Vibration-based Structural Health Monitoring (SHM) techniques are among the most common approaches to structural damage identification. The presence of damage may be identified by monitoring changes in a structure's dynamic behavior under external loading, typically by means of experimental modal analysis (EMA) or operational modal analysis (OMA). These tools normally require a limited number of physically attached transducers (e.g. accelerometers) to record the response of the structure for further analysis. Signal conditioners, wires, wireless receivers and a data acquisition system (DAQ) are also typical components of traditional sensing systems used in vibration-based SHM. However, instrumenting lightweight structures with contact sensors such as accelerometers may induce mass-loading effects, and for large-scale structures the instrumentation is labor intensive and time consuming. Achieving high spatial measurement resolution for a large-scale structure is not always feasible with traditional contact sensors, and fixed contact sensors may also fail to outlive the life-span of the host structure. Among state-of-the-art non-contact measurements, digital video cameras can rapidly collect high-density spatial information from structures remotely. In this paper, the subtle motions in recorded video (i.e. a sequence of images) are extracted by means of Phase-based Motion Estimation (PME), and the extracted information is used to conduct damage identification on a 2.3-m long Skystream® wind turbine blade (WTB). The PME and phase-based motion magnification approach estimates the structural motion from the captured sequence of images for both baseline and damaged test cases of the blade. Operational deflection shapes of the test articles are also quantified and compared for the baseline and damaged states.
In addition, proper lighting can be an issue when working with high-speed cameras; therefore, image enhancement and contrast manipulation were also performed on the raw images. Ultimately, the extracted resonant frequencies and operational deflection shapes are used to detect the presence of damage, demonstrating the feasibility of implementing non-contact video measurements to perform realistic structural damage detection.
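The phase-based idea behind PME can be illustrated in one dimension: a spatial shift of a signal appears as a proportional phase change in its Fourier coefficients. The sketch below estimates a sub-sample shift between two frames of a 1-D signal from the phase at the dominant frequency bin; the actual PME method operates on 2-D video with complex steerable filters, so this is only a simplified analogue.

```python
import numpy as np

def phase_shift_1d(f0, f1):
    """Estimate the sub-sample shift between two 1-D signals from the
    phase difference of their dominant Fourier component.
    For f1(x) = f0(x - d), F1(k) = F0(k) * exp(-i 2*pi*k*d/n)."""
    F0, F1 = np.fft.rfft(f0), np.fft.rfft(f1)
    k = np.argmax(np.abs(F0[1:])) + 1           # dominant non-DC bin
    dphi = np.angle(F1[k] * np.conj(F0[k]))     # phase difference at that bin
    n = len(f0)
    return -dphi * n / (2 * np.pi * k)          # shift in samples
```

Because phase varies linearly with displacement, shifts far smaller than a pixel can be resolved, which is what makes video-based modal analysis of subtle blade motion possible.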

  9. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  10. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In the context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence but also very low power consumption. With a steady camera, motion-detection algorithms based on background estimation find regions in movement and are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to implement motion-detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has allowed the algorithms to be implemented very compactly. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.
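The macropixel-based background-estimation scheme can be sketched in a few lines: average the frame into macropixels, maintain a slowly updated running-average background, and flag macropixels that deviate beyond a threshold. The class below is a hypothetical illustration of that pipeline; the chip's delta-modulation primitive and SIMD mapping are not reproduced here.

```python
import numpy as np

def macropixels(frame, s=8):
    """Down-sample a frame into s x s macropixel averages."""
    h, w = frame.shape
    return frame[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

class MotionDetector:
    """Running-average background model over macropixels (sketch only)."""
    def __init__(self, alpha=0.05, thresh=15.0):
        self.alpha, self.thresh = alpha, thresh
        self.bg = None

    def step(self, frame):
        m = macropixels(frame.astype(np.float32))
        if self.bg is None:                      # first frame seeds the background
            self.bg = m
            return np.zeros(m.shape, dtype=bool)
        moving = np.abs(m - self.bg) > self.thresh
        self.bg = (1 - self.alpha) * self.bg + self.alpha * m  # slow update
        return moving
```

Working on macropixels rather than full frames is what keeps both the memory footprint and the per-frame arithmetic small enough for a milliwatt-class sensor.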

  11. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    PubMed

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K

    2015-01-01

    To determine if a low-cost, automated motion analysis system using the Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods. Academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by the Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Correlation of motion capture with goniometry and detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
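One of the two angle definitions mentioned, the "body angle", can plausibly be computed from three Kinect joint centers as the angle between the trunk and the upper arm. The abstract does not give the formulas, so the sketch below is an assumed interpretation, not the study's exact method.

```python
import numpy as np

def shoulder_elevation(shoulder, elbow, hip):
    """Angle (degrees) between the trunk direction (shoulder -> hip) and
    the upper arm (shoulder -> elbow), from 3-D joint-center coordinates."""
    arm = np.asarray(elbow, float) - np.asarray(shoulder, float)
    trunk = np.asarray(hip, float) - np.asarray(shoulder, float)
    c = np.dot(arm, trunk) / (np.linalg.norm(arm) * np.linalg.norm(trunk))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

With the arm hanging at the side the angle is 0 degrees, and with the arm abducted to horizontal it is 90 degrees, which is the convention a goniometer comparison would need.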

  12. Convergence in full motion video processing, exploitation, and dissemination and activity based intelligence

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Lewis, Gina

    2012-06-01

    Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single source, just-in-time, analog processing; to multi source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions and high value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT sensor rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, to include refined motion imagery tools, additional intelligence sources, activity relevant content management techniques and automated analytics.

  13. In-vehicle group activity modeling and simulation in sensor-based virtual environment

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with physical attributes and appearances comparable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial impressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capability to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.

  14. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
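The velocity-recovery step follows directly from the three quantities listed: for a downward-looking pinhole camera, the ground sampling distance is altitude divided by focal length in pixels, so a frame-to-frame image translation scales into metres per second. A minimal sketch, assuming a flat seabed and a calibrated camera (the paper's exact transform parameterization is not given in the abstract):

```python
def ground_velocity(dx_px, dy_px, altitude_m, focal_px, fps):
    """Convert a frame-to-frame image translation (in pixels) into vehicle
    velocity over the seabed for a downward-looking pinhole camera."""
    scale = altitude_m / focal_px   # metres per pixel on the seabed
    vx = dx_px * scale * fps        # m/s along the image x axis
    vy = dy_px * scale * fps        # m/s along the image y axis
    return vx, vy
```

For example, a 10-pixel shift per frame at 30 fps, 2 m above the seabed with a 1000-pixel focal length, corresponds to 0.6 m/s.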

  15. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are hence sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and to the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  16. Automatic association of chats and video tracks for activity learning and recognition in aerial video surveillance.

    PubMed

    Hammoud, Riad I; Sahin, Cem S; Blasch, Erik P; Rhodes, Bradley J; Wang, Tao

    2014-10-22

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.

  17. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. 
The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  18. Optimization of CMOS image sensor utilizing variable temporal multisampling partial transfer technique to achieve full-frame high dynamic range with superior low light and stop motion capability

    NASA Astrophysics Data System (ADS)

    Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay

    2018-03-01

    Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve a high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology 1 megapixel test chip array and is compared with a traditional 4 × oversampling technique using full charge transfer to show low light SNR superiority of the presented technology.

  19. Aerodynamics in the amusement park: interpreting sensor data for acceleration and rotation

    NASA Astrophysics Data System (ADS)

    Löfstrand, Marcus; Pendrill, Ann-Marie

    2016-09-01

    The sky roller ride depends on interaction with the air to create a rolling motion. In this paper, we analyse forces, torque and angular velocities during different parts of the ride, combining a theoretical analysis, with photos, videos as well as with accelerometer and gyroscopic data, that may be collected e.g. with a smartphone. For interpreting the result, it must be taken into account that the sensors and their coordinate system rotate together with the rider. The sky roller offers many examples for physics education, from simple circular motion, to acceleration and rotation involving several axes, as well as the relation between wing orientation, torque and angular velocities and using barometer pressure to determine the elevation gain.
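The point about rotating sensor frames can be illustrated with a minimal 2-D example: an accelerometer reading taken in the rider's rolled frame must be rotated back by the roll angle before it can be interpreted in the ground frame. A sketch under that simple planar-roll assumption (the ride itself involves rotation about several axes):

```python
import numpy as np

def rotate_to_ground(acc_body, roll_rad):
    """Rotate a 2-D accelerometer reading from the rider's rolled frame
    back into the ground frame, for a roll about the travel axis."""
    c, s = np.cos(roll_rad), np.sin(roll_rad)
    R = np.array([[c, -s],
                  [s,  c]])        # 2-D rotation by the roll angle
    return R @ np.asarray(acc_body, float)
```

For a rider rolled a quarter turn, a reading of 9.81 m/s^2 along the phone's "up" axis is really a horizontal acceleration in the ground frame, which is exactly the kind of reinterpretation the analysis in the paper requires.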

  20. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system has the advantages of compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
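The position-recovery step described (a 3D position from measured intensity plus the 2-D spot on the PSD) can be sketched under an inverse-square assumption: intensity gives range, and the PSD spot gives the ray direction through a pinhole model. The reference intensity `i0` and focal length `f` below are hypothetical calibration constants; the paper's actual factory calibration is not detailed in the abstract.

```python
import numpy as np

def marker_position(x_psd, y_psd, intensity, i0, f):
    """Recover a marker's 3-D position from its 2-D spot on the PSD and
    its measured intensity, assuming inverse-square falloff I = i0 / d**2
    and a pinhole lens of focal length f."""
    d = np.sqrt(i0 / intensity)                 # range from inverse-square law
    ray = np.array([x_psd / f, y_psd / f, 1.0]) # direction through the lens
    return d * ray / np.linalg.norm(ray)        # unit ray scaled to range
```

A marker on the optical axis whose intensity has dropped to a quarter of the reference value is recovered at twice the reference distance, which is the essence of the monocular (single-sensor) depth estimate.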

  1. Low-cost human motion capture system for postural analysis onboard ships

    NASA Astrophysics Data System (ADS)

    Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore

    2011-07-01

    The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest to the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service rules out the optical systems commonly used for human motion analysis: these sensors are not designed to operate in disadvantageous environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used for testing the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from the laboratory tests and preliminary campaigns in the field are presented.

  2. Dr. Peter Cavanaugh Explains the Need and Operation of the FOOT Experiment

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This video clip is an interview with Dr. Peter Cavanaugh, principal investigator for the FOOT experiment. He explains the reasoning behind the experiment and shows some video clips of the FOOT experiment being calibrated and conducted in orbit. The heart of the FOOT experiment is an instrumented suit called the Lower Extremity Monitoring Suit (LEMS). This customized garment is a pair of Lycra cycling tights incorporating 20 carefully placed sensors and the associated wiring, control units, and amplifiers. LEMS enables the electrical activity of the muscles, the angular motions of the hip, knee, and ankle joints, and the force under both feet to be measured continuously. Measurements are also made on the arm muscles. Information from the sensors can be recorded for up to 14 hours on a small, wearable computer.

  3. Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)

    NASA Technical Reports Server (NTRS)

    Newman, Dava; Coleman, Charles; Metaxas, Dimitri

    2004-01-01

    There is a need for a human factors, technology-based bioastronautics research effort to develop an integrated system that reduces risk and provides scientific knowledge of astronaut-induced loads and motions during long-duration missions on the International Space Station (ISS), which will lead to appropriate countermeasures. The primary objectives of the Microgravity Investigation of Crew Reactions in 0-G (MICRO-G) research effort are to quantify astronaut adaptation and movement as well as to model motor strategies for differing gravity environments. The overall goal of this research program is to improve astronaut performance and efficiency through the use of rigorous quantitative dynamic analysis, simulation and experimentation. The MICRO-G research effort provides a modular, kinetic and kinematic capability for the ISS. The collection and evaluation of kinematics (whole-body motion) and dynamics (reacting forces and torques) of astronauts within the ISS will allow for quantification of human motion and performance in weightlessness, gathering fundamental human factors information for design, scientific investigation in the field of dynamics and motor control, technological assessment of microgravity disturbances, and the design of miniaturized, real-time space systems. The proposed research effort builds on a strong foundation of successful microgravity experiments, namely, the EDLS (Enhanced Dynamics Load Sensors) flown aboard the Russian Mir space station (1996-1998) and the DLS (Dynamic Load Sensors) flown on Space Shuttle Mission STS-62. In addition, previously funded NASA ground-based research into sensor technology development and development of algorithms to produce three-dimensional (3-D) kinematics from video images have come to fruition and these efforts culminate in the proposed collaborative MICRO-G flight experiment. 
The required technology and hardware capitalize on previous sensor design, fabrication, and testing and can be flight qualified for a fraction of the cost of an initial spaceflight experiment. Four dynamic load sensors/restraints are envisioned for measurement of astronaut forces and torques. Two standard ISS video cameras record typical astronaut operations and prescribed IVA motions for 3-D kinematics. Forces and kinematics are combined for dynamic analysis of astronaut motion, exploiting the results of the detailed dynamic modeling effort for the quantitative verification of astronaut IVA performance, induced loads, and adaptive control strategies for crewmember whole-body motion in microgravity. This comprehensive effort provides an enhanced human factors approach based on physics-based modeling to identify adaptive performance during long-duration spaceflight, which is critically important for astronaut training as well as providing a spaceflight database to drive countermeasure design.

  4. Persistent Target Tracking Using Likelihood Fusion in Wide-Area and Full Motion Video Sequences

    DTIC Science & Technology

    2012-07-01


  5. The validity of the first and second generation Microsoft Kinect™ for identifying joint center locations during static postures.

    PubMed

    Xu, Xu; McGorry, Raymond W

    2015-07-01

    The Kinect™ sensor released by Microsoft is a low-cost, portable, and marker-less motion tracking system for the video game industry. Since the first generation Kinect sensor was released in 2010, many studies have been conducted to examine the validity of this sensor when used to measure body movement in different research areas. In 2014, Microsoft released the second generation Kinect sensor for computers, with a better-resolution depth sensor. However, very few studies have performed a direct comparison between all the Kinect sensor-identified joint center locations and their corresponding motion tracking system-identified counterparts, the result of which may provide some insight into the error of the Kinect-identified segment lengths and joint angles, as well as the feasibility of applying inverse dynamics to Kinect-identified joint centers. The purpose of the current study is to first propose a method to align the coordinate system of the Kinect sensor with respect to the global coordinate system of a motion tracking system, and then to examine the accuracy of the Kinect sensor-identified coordinates of joint locations during 8 standing and 8 sitting postures of daily activities. The results indicate the proposed alignment method can effectively align the Kinect sensor with respect to the motion tracking system. The accuracy level of the Kinect-identified joint center location is posture-dependent and joint-dependent. For the upright standing posture, the average error across all the participants and all Kinect-identified joint centers is 76 mm and 87 mm for the first and second generation Kinect sensor, respectively. In general, standing postures can be identified with better accuracy than sitting postures, and the identification accuracy of the joints of the upper extremities is better than for the lower extremities. This result may provide some information regarding the feasibility of using the Kinect sensor in future studies. 
Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
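The abstract does not specify the alignment method, but a standard choice for registering one set of 3-D joint centers to another is the Kabsch algorithm, which finds the least-squares rigid rotation and translation via an SVD. A sketch of that approach (an assumption, not necessarily the authors' exact procedure):

```python
import numpy as np

def kabsch_align(kinect_pts, mocap_pts):
    """Least-squares rigid transform (R, t) mapping Kinect coordinates
    onto the motion-tracking system's global frame (Kabsch algorithm)."""
    P = np.asarray(kinect_pts, float)
    Q = np.asarray(mocap_pts, float)
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (no reflection)
    t = cQ - R @ cP
    return R, t
```

Given corresponding points in both frames (e.g. marker triads seen by both systems), `R @ p + t` maps any Kinect-frame point into the mocap frame, after which joint-center errors like the 76 mm / 87 mm figures can be computed directly.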

  6. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.

  7. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a sufficient output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors solve the problem with a new imaging method in which four HDTV sensors are mounted on new color-separation optics so that their pixel sampling patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.

  8. Data simulation for the Lightning Imaging Sensor (LIS)

    NASA Technical Reports Server (NTRS)

    Boeck, William L.

    1991-01-01

This project aims to build a data analysis system that will utilize existing videotape scenes of lightning as viewed from space. The resulting data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990, the quality and quantity of video were insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques for deducing information that was not directly available. Because the closed-circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as to develop indirect methods for estimating the necessary parameters. Any data system coping with full-motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full-motion video. Because the data had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real-time digitizer moved the bottleneck from data acquisition to data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is also described, in the hope that its components may come on the market at reasonable prices in the next few years.

  9. A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wan, Chao; Yuan, Fuh-Gwo

    2017-04-01

In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The setup comprises a high-speed camera and a line laser, which together capture the out-of-plane displacement of a cantilever beam. The cantilever beam, containing an artificial crack, was excited, and the vibration was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.

  10. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    PubMed

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

In this paper, we present a computationally efficient video restoration algorithm that addresses both blur and noise for a Nyquist-sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model-based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and a novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to treat local motion more robustly. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left by the Kalman filter stage. In image areas where the temporal Kalman filter provides significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter with BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
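The temporal stage can be illustrated with a per-pixel scalar Kalman filter. This is a minimal sketch under a static-scene assumption; the paper's affine motion compensation, multidelay variant, and adaptive Wiener stage are omitted, and all numbers are illustrative.

```python
def temporal_kalman(observations, process_var, meas_var):
    """Scalar Kalman filter applied to one pixel's intensity over time.

    A minimal sketch of the temporal stage only, assuming a static scene;
    process_var and meas_var are illustrative tuning values.
    """
    x = observations[0]   # state estimate (pixel intensity)
    p = meas_var          # estimate variance
    out = []
    for z in observations:
        p = p + process_var        # predict (static-scene model)
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # update with the new frame
        p = (1.0 - k) * p          # posterior (residual) variance
        out.append(x)
    return out, p

# Noisy observations of a pixel whose true intensity is 100.
frames = [103.0, 97.0, 101.0, 99.0, 102.0, 98.0, 100.5, 99.5]
filtered, residual_var = temporal_kalman(frames, process_var=0.01, meas_var=4.0)
print(filtered[-1], residual_var)
```

The shrinking `residual_var` is exactly the quantity the spatial stage would adapt to: where it stays large, a Wiener-type filter must trade deconvolution for noise suppression.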

  11. Comparison of Computational Results with a Low-g, Nitrogen Slosh and Boiling Experiment

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E.; Moder, Jeffrey P.

    2015-01-01

This paper compares a fluid/thermal simulation, in Fluent, with a low-g nitrogen slosh and boiling experiment. In 2010, the French space agency, CNES, performed cryogenic nitrogen experiments in a low-g aircraft campaign. From one parabolic flight, a low-g interval was simulated that focuses on the low-g motion of liquid and vapor nitrogen with significant condensation, evaporation, and boiling. The computational results are compared with high-speed video, pressure data, heat transfer, and temperature data from sensors on the axis of the cylindrical tank. The experimental and computational results compare favorably. The initial temperature stratification is in good agreement, and the two-phase fluid motion is qualitatively captured. Temperature data are matched, except that the temperature sensors are unable to capture fast temperature transients when they move from wet to dry (liquid to vapor) operation. The pressure evolution is approximately captured, but condensation and evaporation rate modeling and prediction need further theoretical analysis.

  12. Collaborative Point Paper on Border Surveillance Technology

    DTIC Science & Technology

    2007-06-01

Systems PLC LORHIS (Long Range Hyperspectral Imaging System) can be configured for either manned or unmanned aircraft to automatically detect and...Airships, and/or Aerostats (RF, Electro-Optical, Infrared, Video) • Land-based Sensor Systems (Attended/Mobile and Unattended: e.g., CCD, Motion, Acoustic...electronic surveillance technologies for intrusion detection and warning. These ground-based systems are primarily short-range, up to around 500 meters

  13. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field rather than response speed. © The Author(s) 2016.

  14. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    NASA Astrophysics Data System (ADS)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. As a non-contact, sensor-free technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured with the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without the KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
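The core of a VPPG pipeline can be sketched in a few lines: average the green channel over a skin region, detrend, and pick the strongest spectral peak in the physiological band. The synthetic signal and frame rate below are illustrative stand-ins; a real implementation would add face/ROI tracking and bandpass filtering.

```python
import math

FPS = 30.0      # assumed video frame rate
HR_TRUE = 72    # beats per minute used to synthesize the test signal

# Synthetic stand-in for the mean green-channel value of a skin ROI per
# frame; a real pipeline would average the green channel over the face ROI.
n_frames = 300
signal = [0.5 * math.sin(2 * math.pi * (HR_TRUE / 60.0) * (i / FPS)) + 120.0
          for i in range(n_frames)]

mean = sum(signal) / len(signal)
detrended = [s - mean for s in signal]

def power_at(freq_hz):
    # Spectral power via correlation with sine/cosine at one frequency.
    c = sum(x * math.cos(2 * math.pi * freq_hz * i / FPS)
            for i, x in enumerate(detrended))
    s = sum(x * math.sin(2 * math.pi * freq_hz * i / FPS)
            for i, x in enumerate(detrended))
    return c * c + s * s

# Search the physiological band 40-180 bpm for the strongest component.
hr_estimate = max(range(40, 181), key=lambda bpm: power_at(bpm / 60.0))
print(hr_estimate)  # -> 72
```

The 10-second window here gives about 6 bpm of spectral resolution, which is one reason the studies above find accuracy sensitive to the number of frames analyzed.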

  15. Anatomical calibration for wearable motion capture systems: Video calibrated anatomical system technique.

    PubMed

    Bisi, Maria Cristina; Stagni, Rita; Caroselli, Alessio; Cappello, Angelo

    2015-08-01

Inertial sensors are becoming widely used for the assessment of human movement in both clinical and research applications, thanks to their usability outside the laboratory. This work proposes a method for calibrating anatomical landmark positions in the wearable sensor reference frame with an easy-to-use, portable, and low-cost device composed of an off-the-shelf camera, a stick, and a pattern attached to the inertial sensor. The proposed technique is referred to as the video Calibrated Anatomical System Technique (vCAST). The absolute orientation of a synthetic femur was tracked both using vCAST together with an inertial sensor and using stereo-photogrammetry as the reference. Anatomical landmark calibration showed a mean absolute error of 0.6±0.5 mm: these errors are smaller than those affecting the in-vivo identification of anatomical landmarks. The roll, pitch, and yaw anatomical frame orientations showed root-mean-square errors close to the accuracy limit of the wearable sensor used (1°), highlighting the reliability of the proposed technique. In conclusion, the present paper proposes and preliminarily verifies the performance of a method (vCAST) for calibrating anatomical landmark positions in the wearable sensor reference frame: the technique requires little time and is highly portable, easy to implement, and usable outside the laboratory. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Motion compensation for structured light sensors

    NASA Astrophysics Data System (ADS)

    Biswas, Debjani; Mertz, Christoph

    2015-05-01

For structured light methods to work outdoors, the strong background illumination from the sun must be suppressed. This can be done with bandpass filters, fast shutters, and background subtraction. In general, this last method requires the sensor system to be stationary during data taking. The contribution of this paper is a method that compensates for the motion when the system is moving. The key idea is to use video stabilization techniques that work even when the illuminator is switched on and off from one frame to the next. We used OpenCV functions and modules to implement a robust and efficient method. We evaluated it under various conditions and tested it on a moving robot outdoors. We demonstrate that one can not only perform 3D reconstruction under strong ambient light, but also observe optical properties of the objects in the environment.

  17. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

The first live High Definition Television (HDTV) broadcast from a spacecraft took place in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese space agency spacecraft. A great deal has been learned about the operational applicability of HDTV and high-resolution imagery since that first live broadcast. This paper discusses the current state of real-time and file-based HDTV and higher-resolution video for space operations. A potential roadmap is provided for further development and innovation in high-resolution digital motion imagery, including gaps in technology enablers, especially for deep-space and unmanned missions. Specific topics covered in the paper include: an update on radiation tolerance and performance of various camera types and sensors, and the ramifications for the future applicability of these cameras in space operations; practical experience with downlinking very large imagery files over links with breaks in coverage; ramifications of larger camera resolutions such as Ultra-High Definition, 6,000 [pixels], and 8,000 [pixels] in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, optical communications, Bayer-pattern sensors, and other similar innovations; and likely future operations scenarios for deep-space missions with extreme latency and intermittent communications links.

  18. Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment

    PubMed Central

    Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael

    2015-01-01

Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology has incorporated marker-less motion capture devices to bring human movement into game play. Using the Kinect Sensor and Microsoft SDK, this research aimed to estimate the mechanical work performed by the human body and the subsequent metabolic energy using predictive algorithmic models. Nineteen university students participated in a repeated-measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system, with mechanical movement captured by the combined motion data from two Kinect cameras. Estimates of the body segment properties, such as segment mass, length, centre-of-mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov equations as adjusted by de Leva, with an adjustment made for posture cost. The GPML toolbox implementation of Gaussian Process Regression, a locally weighted k-nearest-neighbour regression, and a linear regression technique were evaluated for their performance in predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture cost of static poses should be considered as a contributing factor.
When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
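Of the three regressors compared above, the locally weighted k-nearest-neighbour approach is simple enough to sketch directly. The feature and metabolic-cost values below are invented stand-ins for the study's Kinect-derived features, not its data.

```python
import math

def knn_predict(query, samples, targets, k=3):
    """Locally weighted k-nearest-neighbour regression (a minimal sketch).

    Neighbours are weighted by inverse distance; an exact match short-circuits
    to its own target to avoid division by zero.
    """
    nearest = sorted(
        (math.dist(query, x), y) for x, y in zip(samples, targets)
    )[:k]
    if nearest[0][0] == 0.0:
        return nearest[0][1]
    weights = [1.0 / d for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Toy data: one feature (mechanical work per repetition, arbitrary units)
# versus measured metabolic cost -- illustrative values only.
features = [(1.0,), (2.0,), (3.0,), (4.0,), (5.0,)]
metabolic = [4.1, 8.0, 12.2, 15.9, 20.1]

estimate = knn_predict((2.5,), features, metabolic, k=2)
print(estimate)
```

With `k=2` the query at 2.5 sits equidistant from its two neighbours, so the prediction is simply their mean.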

  19. Automatically rating trainee skill at a pediatric laparoscopic suturing task.

    PubMed

    Oquendo, Yousi A; Riddle, Elijah W; Hiller, Dennis; Blinman, Thane A; Kuchenbecker, Katherine J

    2018-04-01

Minimally invasive surgeons must acquire complex technical skills while minimizing patient risk, a challenge that is magnified in pediatric surgery. Trainees need realistic practice with frequent detailed feedback, but human grading is tedious and subjective. We aim to validate a novel motion-tracking system and algorithms that automatically evaluate trainee performance of a pediatric laparoscopic suturing task. Subjects (n = 32) ranging from medical students to fellows performed two trials of intracorporeal suturing in a custom pediatric laparoscopic box trainer after watching a video of ideal performance. The motions of the tools and endoscope were recorded over time using a magnetic sensing system, and both tool grip angles were recorded using handle-mounted flex sensors. An expert rated the 63 trial videos on five domains from the Objective Structured Assessment of Technical Skill (OSATS), yielding summed scores from 5 to 20. Motion data from each trial were processed to calculate 280 features. We used regularized least-squares regression to identify the most predictive features from different subsets of the motion data and then built six regression-tree models that predict the summed OSATS score. Model accuracy was evaluated via leave-one-subject-out cross-validation. The model that used all sensor data streams performed best, achieving 71% accuracy at predicting summed scores within 2 points, 89% accuracy within 4 points, and a correlation of 0.85 with human ratings. Of the rounded average OSATS score predictions, 59% were perfect and 100% were within 1 point. This model employed 87 features, including none based on completion time: 77 from tool tip motion, 3 from tool tip visibility, and 7 from grip angle. Our novel hardware and software automatically rated previously unseen trials with summed OSATS scores that closely match human expert ratings.
Such a system facilitates more feedback-intensive surgical training and may yield insights into the fundamental components of surgical skill.
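The regression pipeline above, a regularized least-squares fit evaluated inside a hold-one-out loop, can be sketched on toy data. The two features and the scores below are invented; the real system used 280 features, 32 subjects, leave-one-subject-out folds, and regression trees on top of the feature selection.

```python
def ridge_2feature(X, y, lam=1.0):
    """Regularized (ridge) least squares for exactly two features.

    Solves the normal equations (X^T X + lam*I) w = X^T y in closed form
    for the 2x2 case -- a minimal sketch of the regression step only.
    """
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Toy features (e.g. path length, idle time) and scores: y = 4*x0 + 2*x1.
X = [(1.0, 0.5), (2.0, 1.0), (3.0, 2.0), (4.0, 2.5), (5.0, 3.5)]
y = [5.0, 10.0, 16.0, 21.0, 27.0]

# Hold-one-out cross-validation: fit on all trials but one, predict it.
errors = []
for i in range(len(X)):
    w = ridge_2feature(X[:i] + X[i + 1:], y[:i] + y[i + 1:], lam=0.1)
    pred = w[0] * X[i][0] + w[1] * X[i][1]
    errors.append(abs(pred - y[i]))

print(max(errors))
```

Because the toy targets are exactly linear in the features, held-out errors stay small; only the ridge penalty's slight shrinkage keeps them from being zero.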

  20. Astronaut-Induced Disturbances to the Microgravity Environment of the Mir Space Station

    NASA Technical Reports Server (NTRS)

    Newman, Dava J.; Amir, Amir R.; Beck, Sherwin M.

    2001-01-01

In preparation for the International Space Station, the Enhanced Dynamic Load Sensors Space Flight Experiment measured the forces and moments astronauts exerted on the Mir Space Station during their daily on-orbit activities, to quantify the astronaut-induced disturbances to the microgravity environment during a long-duration space mission. An examination of video recordings of the astronauts moving in the modules and using the instrumented crew restraint and mobility load sensors led to the identification of several typical astronaut motions and the quantification of the associated forces and moments exerted on the spacecraft. For the 2806 disturbances recorded by the foot restraints and hand-hold sensor, the highest force magnitude was 137 N. For about 96% of the time, the maximum force magnitude was below 60 N, and for about 99% of the time it was below 90 N. For 95% of the astronaut motions, the rms force level was below 9.0 N. It can be concluded that the astronaut-induced loads expected from usual intravehicular activity are considerably smaller than previously thought and will not significantly disturb the microgravity environment.

  1. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

starter (cold start) X X Fire suppression X Fording control X Fuel control X Fuel tank selector X Garage toggle X Gear selector X X X X Hazard warning...optic sensors Sensor switch Video Radar IR Thermal imaging system Image intensifier Laser ranger Video camera selector Forward Stereo Rear Sensor control...

  2. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis: for instance, independent object motion when the camera itself is moving, or figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane plus the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard problem of general 3D motion analysis. In addition, a natural coordinate system in the environment is used to describe the scene, which can simplify motion-based segmentation. This work is part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system under development are presented.

  3. 36 CFR 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? 1254.88 Section... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and a...

  4. 36 CFR 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? 1254.88 Section... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and a...

  5. 36 CFR 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? 1254.88 Section... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and a...

  6. 36 CFR 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? 1254.88 Section... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and a...

  7. Step-Count Accuracy of 3 Motion Sensors for Older and Frail Medical Inpatients.

    PubMed

    McCullagh, Ruth; Dillon, Christina; O'Connell, Ann Marie; Horgan, N Frances; Timmons, Suzanne

    2017-02-01

To measure the step-count accuracy of an ankle-worn accelerometer, a thigh-worn accelerometer, and a pedometer in older and frail inpatients. Cross-sectional study in a research room within a hospital. Convenience sample of inpatients (N=32; age ≥65 years) who were able to walk 20 m independently, with or without a walking aid. Patients completed a 40-minute program of predetermined tasks while wearing the three motion sensors simultaneously. A video recording of the procedure provided the criterion measurement of step count. Median percentage errors were calculated for all tasks, for slow versus fast walkers, for independent walkers versus walking-aid users, and over shorter versus longer distances. The intraclass correlation was calculated, and accuracy was graphically displayed by Bland-Altman plots. Thirty-two patients (mean age, 78.1±7.8 y) completed the study. Fifteen (47%) were women, and 17 (51%) used walking aids. Their median speed was 0.46 m/s (interquartile range [IQR], 0.36-0.66 m/s). The ankle-worn accelerometer overestimated steps (median error, 1% [IQR, -3% to 13%]). The other motion sensors underestimated steps (median errors, -40% [IQR, -51% to -35%] and -38% [IQR, -93% to -27%], respectively). The ankle-worn accelerometer proved more accurate over longer distances (median error, 3% [IQR, 0%-9%]) than over shorter distances (median error, -10% [IQR, -23% to 9%]). The ankle-worn accelerometer gave the most accurate step-count measurement and was most accurate over longer distances. Neither of the other motion sensors had acceptable margins of error. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
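The study's headline metric, the median percentage error of each device against the video criterion, is easy to reproduce. The step counts below are invented to mimic an accurate ankle sensor and a systematically undercounting sensor; they are not the study's data.

```python
import statistics

def percent_errors(device_counts, video_counts):
    # Per-task step-count error of a sensor relative to the video criterion.
    return [100.0 * (d - v) / v for d, v in zip(device_counts, video_counts)]

# Invented counts: the video recording provides the ground-truth step count.
video = [100, 80, 120, 60, 90]
ankle = [101, 82, 118, 61, 95]   # small over- and under-counts
thigh = [60, 50, 70, 35, 55]     # systematic undercounting

ankle_median = statistics.median(percent_errors(ankle, video))
thigh_err = percent_errors(thigh, video)
thigh_median = statistics.median(thigh_err)
q1, _, q3 = statistics.quantiles(thigh_err, n=4)  # IQR bounds

print(round(ankle_median, 2), round(thigh_median, 2),
      round(q1, 2), round(q3, 2))
```

A negative median (here around -40% for the undercounting device) is exactly the pattern reported for the thigh-worn accelerometer and pedometer above.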

  8. 36 CFR § 1254.88 - What are the rules for the Motion Picture, Sound, and Video Research Room at the National...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Motion Picture, Sound, and Video Research Room at the National Archives at College Park? § 1254.88... to Using Copying Equipment § 1254.88 What are the rules for the Motion Picture, Sound, and Video.... (c) We provide you with a copy of the Motion Picture, Sound, and Video Research Room rules and a...

  9. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording, it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost no such video cameras are installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of the structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using experimental records from the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants of and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
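The displacement-tracking step can be illustrated in one dimension: estimate the shift between two intensity profiles by minimizing the sum of squared differences over candidate offsets. This is a toy stand-in for dense optical flow, which operates in 2D with subpixel refinement; the profiles below are synthetic.

```python
def estimate_shift(ref, cur, max_shift=5):
    """Estimate the integer displacement between two intensity profiles by
    minimizing the mean squared difference over candidate shifts -- a 1D
    stand-in for the per-pixel optical-flow step.
    """
    best_shift, best_cost = 0, float("inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        # Compare ref against cur sampled at the shifted positions.
        pairs = [(ref[i], cur[i + s]) for i in range(n) if 0 <= i + s < n]
        cost = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A synthetic edge profile and the same profile translated 3 pixels right,
# as if a building facade moved between two stabilized frames.
profile = [0, 0, 0, 10, 50, 90, 100, 100, 100, 100, 100, 100]
shifted = [0, 0, 0, 0, 0, 0, 10, 50, 90, 100, 100, 100]

print(estimate_shift(profile, shifted))  # -> 3
```

Repeating this per frame yields a displacement time-history whose spectrum carries the frequency content the method above is designed to recover.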

  10. Tetherless ergonomics workstation to assess nurses' physical workload in a clinical setting.

    PubMed

    Smith, Warren D; Nave, Michael E; Hreljac, Alan P

    2011-01-01

    Nurses are at risk of physical injury when moving immobile patients. This paper describes the development and testing of a tetherless ergonomics workstation that is suitable for studying nurses' physical workload in a clinical setting. The workstation uses wearable sensors to record multiple channels of body orientation and muscle activity and wirelessly transmits them to a base station laptop computer for display, storage, and analysis. In preparation for use in a clinical setting, the workstation was tested in a laboratory equipped for multi-camera video motion analysis. The testing included a pilot study of the effect of bed height on student nurses' physical workload while they repositioned a volunteer posing as a bedridden patient toward the head of the bed. Each nurse subject chose a preferred bed height, and data were recorded, in randomized order, with the bed at this height, at 0.1 m below this height, and at 0.1 m above this height. The testing showed that the body orientation recordings made by the wearable sensors agreed closely with those obtained from the video motion analysis system. The pilot study showed the following trends: As the bed height was raised, the nurses' trunk flexion at both thoracic and lumbar sites and lumbar muscle effort decreased, whereas trapezius and deltoid muscle effort increased. These trends will be evaluated by further studies of practicing nurses in the clinical setting.

  11. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  12. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires tuning several parameters to balance satisfying the optical flow constraint against the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth for validating the velocity estimates. This problem is present in all real video sequences used as input to motion estimation algorithms, and it remains an open problem in biomedical applications such as motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
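The trade-off between the optical-flow data term and the imposed smoothness can be illustrated with a minimal Horn-Schunck-style iteration (a generic textbook scheme, not the authors' optimized framework); the weight `alpha` below is the smoothness parameter that such a parameter-optimization framework would tune:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=100):
    """Minimal Horn-Schunck optical flow between two frames. Larger
    alpha imposes more smoothness (coarser velocity estimates);
    smaller alpha trusts the optical-flow constraint more."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)          # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                          # temporal derivative
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    neighbor_avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                                     + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):
        ubar, vbar = neighbor_avg(u), neighbor_avg(v)
        num = Ix * ubar + Iy * vbar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = ubar - Ix * num / den
        v = vbar - Iy * num / den
    return u, v
```

On speckle-heavy ultrasound frames, raising `alpha` suppresses noise-driven flow at the cost of velocity resolution, which is exactly the tension the abstract describes.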

  13. Motion sickness and postural sway in console video games.

    PubMed

    Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar

    2008-04-01

    We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  14. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    PubMed

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting video streaming is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and accounts for the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. A compression protocol, a transport protocol, and a routing protocol are proposed at the application, transport, and network layers respectively, and a dropping scheme is also presented at the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  15. A distributed automatic target recognition system using multiple low resolution sensors

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj

    2008-04-01

In this paper, we propose a multi-agent system which uses swarming techniques to perform high-accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high-accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and exploits swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm, based on homography decomposition and ground-plane surface estimation, is proposed to address this.

  16. Precise color images from a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved considerably, most applications use them only to obtain high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from those objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity levels for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels by this method.
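The displacement between images from two of the sensors can be estimated, for instance, by phase correlation. This integer-pixel sketch is a generic registration technique, standing in for (not reproducing) the paper's pixel-based adjustment procedure:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer-pixel displacement of image b relative to a
    via phase correlation: normalize the cross-power spectrum to keep
    phase only, then locate the correlation peak."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12             # keep phase information only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices back to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Sub-pixel accuracy of the kind reported above (0.2 pixels) would additionally require interpolating around the correlation peak.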

  17. Video Analysis of Rolling Cylinders

    ERIC Educational Resources Information Center

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s⁻¹, and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…
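The theoretical benchmark for such a comparison is the standard result for rolling without slipping, a = g sin(theta) / (1 + I/(m r^2)); a small helper makes the solid-versus-hollow prediction explicit:

```python
import math

def rolling_acceleration(theta_deg, k):
    """Acceleration of a cylinder rolling without slipping down an
    incline: a = g*sin(theta) / (1 + k), with k = I/(m r^2)
    (k = 1/2 for a solid cylinder, k = 1 for a thin-walled hollow one)."""
    g = 9.81  # m/s^2
    return g * math.sin(math.radians(theta_deg)) / (1 + k)
```

Because k is larger for the hollow cylinder, the solid cylinder is predicted to accelerate faster at every angle, which the frame-by-frame video data can test directly.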

  18. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study.

    PubMed

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-02-03

The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, and safe, and has low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
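The frame-subtraction step can be sketched as follows. This is a simplified stand-in: the actual system magnifies the breathing motion first and works on Kinect-derived frames, and the thresholds below are illustrative only:

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=15):
    """Frame subtraction: flag pixels whose grayscale change between
    consecutive frames exceeds `thresh` as moving."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

def has_motion(prev_frame, frame, min_pixels=50):
    """Declare breathing-scale motion present when enough pixels moved."""
    return int(motion_mask(prev_frame, frame).sum()) >= min_pixels
```

Counting frames in which no motion region appears over a sliding window is one simple way such a system could flag a candidate apnoea interval.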

  19. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
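An exhaustive SAD-based block match of the kind used to speed up motion estimation might look like this (a generic sketch, not the paper's implementation; block size and search range are illustrative):

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) within
    +/-search pixels that minimizes the sum of absolute differences
    (SAD) between a block of `cur` and the reference frame `ref`."""
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best, best_dv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv
```

Restricting the search to a small window around each block is what yields the reported reduction in motion-estimation time relative to denser schemes.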

  20. Speed Biases With Real-Life Video Clips

    PubMed Central

    Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio

    2018-01-01

    We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875

  2. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without any need for video decompression. Experimental results are reported for a database of news video clips.
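The least-squares camera-parameter fit can be illustrated with a simplified pan/zoom model v = t + s(p - c) fitted to sparse motion vectors (the paper's full model also covers tilt; this function is a hypothetical sketch, not the authors' code):

```python
import numpy as np

def fit_pan_zoom(points, vectors):
    """Least-squares fit of a pan/zoom model v = t + s*(p - c) to sparse
    motion vectors, with c the centroid of the sample points.
    points, vectors: (n, 2) arrays. Returns (tx, ty, s)."""
    c = points.mean(axis=0)
    d = points - c
    n = len(points)
    # unknowns [tx, ty, s]:  vx = tx + s*dx,  vy = ty + s*dy
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = 1
    A[0::2, 2] = d[:, 0]
    A[1::2, 1] = 1
    A[1::2, 2] = d[:, 1]
    b = vectors.reshape(-1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # tx, ty, s
```

A positive fitted s corresponds to zoom-out-like divergence of the vector field, while (tx, ty) captures the pan component.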

  3. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. There are currently two ways of tracking human motion, marker-based and markerless; here a hybrid framework is introduced to recover the input video contents. A standard 3D motion database is built in advance using marker techniques. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching problem: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to trampoline sport, where the complicated human motion parameters can be obtained from single-camera video sequences, and extensive experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
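Silhouette matching with moment invariants can be sketched using the first Hu invariant, phi1 = eta20 + eta02, which is unchanged by translation of the silhouette (the paper may use more of the seven Hu invariants; this minimal version illustrates the idea):

```python
import numpy as np

def hu1(silhouette):
    """First Hu moment invariant phi1 = eta20 + eta02 of a binary
    silhouette. Central moments remove translation; normalizing by
    m00^2 gives eta_pq = mu_pq / m00^((p+q)/2 + 1) for p+q = 2."""
    ys, xs = np.nonzero(silhouette)
    m00 = len(xs)                      # area (zeroth moment)
    xc, yc = xs.mean(), ys.mean()      # centroid
    mu20 = ((xs - xc) ** 2).sum()
    mu02 = ((ys - yc) ** 2).sum()
    return (mu20 + mu02) / m00 ** 2
```

Matching a video silhouette against the projected 2D library then reduces to a nearest-neighbor search over such invariant vectors.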

  4. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

    PubMed Central

    Hammoud, Riad I.; Sahin, Cem S.; Blasch, Erik P.; Rhodes, Bradley J.; Wang, Tao

    2014-01-01

    We describe two advanced video analysis techniques, including video-indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated for wide aerial/overhead imagery over common data sets affording an improvement in tracking from video data alone, leading to 84% detection with modest misdetection/false alarm results due to the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports. PMID:25340453

  5. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  6. Motion sickness, console video games, and head-mounted displays.

    PubMed

    Merhi, Omar; Faugloire, Elise; Flanagan, Moira; Stoffregen, Thomas A

    2007-10-01

    We evaluated the nauseogenic properties of commercial console video games (i.e., games that are sold to the public) when presented through a head-mounted display. Anecdotal reports suggest that motion sickness may occur among players of contemporary commercial console video games. Participants played standard console video games using an Xbox game system. We varied the participants' posture (standing vs. sitting) and the game (two Xbox games). Participants played for up to 50 min and were asked to discontinue if they experienced any symptoms of motion sickness. Sickness occurred in all conditions, but it was more common during standing. During seated play there were significant differences in head motion between sick and well participants before the onset of motion sickness. The results indicate that commercial console video game systems can induce motion sickness when presented via a head-mounted display and support the hypothesis that motion sickness is preceded by instability in the control of seated posture. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  7. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. Video-based human-computer interaction (HCI) techniques in particular have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of releasing players from the intractable game controller. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the achievable accuracy depends heavily on each subject's characteristics and on environmental noise. Lately, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions in specific performances (e.g., a golf swing or walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the entries of a 3D motion-capture data matrix are not pixel values but are closer to the human level of semantics.
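The column-selection idea can be shown directly with NumPy (the joint-to-column mapping below is hypothetical, not VICON's actual channel layout):

```python
import numpy as np

# Hypothetical layout mirroring the motion-capture matrix described
# above: rows are time frames, columns are per-joint channels.
COLUMNS = {"left_arm": slice(0, 3),
           "right_arm": slice(3, 6),
           "torso": slice(6, 9)}

def sub_body_motion(mocap, part):
    """Extract one sub-body part's motion by selecting its columns."""
    return mocap[:, COLUMNS[part]]
```

Because the matrix entries are joint coordinates rather than pixel values, the slice itself is already a semantically meaningful trajectory.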

  8. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  9. Improving ISR Radar Utilization (How I quit blaming the user and made the radar easier to use).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin Walter

In modern multi-sensor multi-mode Intelligence, Surveillance, and Reconnaissance (ISR) platforms, the plethora of options available to a sensor/payload operator is quite large, leading to an over-worked operator often down-selecting to favorite sensors and modes. For example, Full Motion Video (FMV) is justifiably a favorite sensor at the expense of radar modes, even if radar modes can offer unique and advantageous information. The challenge is then to increase the utilization of the radar modes in a manner attractive to the sensor/payload operator. We propose that this is best accomplished by combining sensor modes and displays into 'super-modes'. This report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  11. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
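The "little or no motion" check can be approximated with a mean absolute frame-difference test (a crude stand-in for the optical-flow-based screening described above; the threshold is illustrative):

```python
import numpy as np

def tag_low_motion(frames, thresh=1.0):
    """Tag a clip as 'no/low motion' when the mean absolute
    frame-to-frame grayscale difference stays below `thresh` for
    every consecutive pair of frames."""
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return max(diffs, default=0.0) < thresh
```

Clips tagged this way could be flagged in the generated metadata rather than stored and forwarded to an analyst.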

  12. A fast bilinear structure from motion algorithm using a video sequence and inertial sensors.

    PubMed

    Ramachandran, Mahesh; Veeraraghavan, Ashok; Chellappa, Rama

    2011-01-01

In this paper, we study the benefits of the availability of a specific form of additional information—the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors—and a monocular video sequence for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large-scale applications. The SfM algorithm developed in this paper is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for large reconstruction problems, the proposed algorithm takes less time to reach its solution than bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research data set.

  13. Blind prediction of natural video quality.

    PubMed

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.

  14. Introductory Physics Experiments Using the Wiimote

    NASA Astrophysics Data System (ADS)

    Somers, William; Rooney, Frank; Ochoa, Romulo

    2009-03-01

The Wii, a video game console, is a very popular device with millions of units sold worldwide over the past two years. Although it is not computationally powerful, to a physics educator its most important components are its controllers. The Wiimote (remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open-source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the variable acceleration in simple harmonic motion, the centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained using motion and/or force sensors.

  15. Learning Activity Models for Multiple Agents in a Smart Space

    NASA Astrophysics Data System (ADS)

    Crandall, Aaron; Cook, Diane J.

    With the introduction of more complex intelligent environment systems, the possibilities for customizing system behavior have increased dramatically. Significant headway has been made in tracking individuals through spaces using wireless devices [1, 18, 26] and in recognizing activities within the space based on video data (see chapter by Brubaker et al. and [6, 8, 23]), motion sensor data [9, 25], wearable sensors [13] or other sources of information [14, 15, 22]. However, much of the theory and most of the algorithms are designed to handle one individual in the space at a time. Resident tracking, activity recognition, event prediction, and behavior automation become significantly more difficult in multi-agent situations, when multiple residents share the environment.

  16. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
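The spatial index rests on a gradient similarity measure; a common form of that measure can be sketched as below (the constants and exact formulation in the paper may differ).

```python
import numpy as np

def gradient_similarity(ref, dist, c=0.01):
    """Mean per-pixel gradient-magnitude similarity between a reference
    and a distorted frame; c is a small stabilizing constant."""
    gy_r, gx_r = np.gradient(ref)
    gy_d, gx_d = np.gradient(dist)
    g_r = np.hypot(gx_r, gy_r)
    g_d = np.hypot(gx_d, gy_d)
    sim = (2 * g_r * g_d + c) / (g_r**2 + g_d**2 + c)
    return sim.mean()

img = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
print(round(gradient_similarity(img, img), 3))                 # -> 1.0 for identical frames
print(gradient_similarity(img, img + 0.2 * np.sin(10 * img)) < 1.0)  # -> True
```

The per-pixel similarity is bounded by 1 and drops wherever the distortion changes local edge structure, which is what makes it a useful spatial quality index.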

  17. Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices

    PubMed Central

    Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee

    2015-01-01

    In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/AC coefficients of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
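The selective-encryption idea can be sketched independently of the H.264 syntax: only the values flagged as important are XORed with a keystream and everything else passes through untouched, so the bitstream structure is preserved. The SHA-256 counter-mode keystream below is an illustrative stand-in for a real cipher such as AES-CTR, and the selection mask is a toy.

```python
import hashlib

def keystream(key, nonce, length):
    """SHA-256 counter-mode keystream (stand-in for a real stream cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def selective_encrypt(coeffs, important, key, nonce=b"frame0"):
    """Encrypt only the coefficients flagged important (e.g. I-macroblock
    DC/AC coefficients or P-macroblock motion vectors); the rest stay in
    the clear so the stream remains format-compliant."""
    n_targets = sum(important)
    ks = iter(keystream(key, nonce, n_targets))
    return [c ^ next(ks) if flag else c
            for c, flag in zip(coeffs, important)]

coeffs = [17, 3, 0, 42, 8, 255]
important = [True, False, False, True, False, True]   # toy selection mask
enc = selective_encrypt(coeffs, important, b"secret-key")
dec = selective_encrypt(enc, important, b"secret-key")
print(dec == coeffs)                                  # XOR keystream is its own inverse -> True
print(enc[1] == coeffs[1] and enc[2] == coeffs[2])    # unselected data untouched -> True
```

Because decoding corrupt I-macroblock coefficients propagates errors into dependent frames, encrypting only this small fraction of the data can still destroy the visual content, which is the property the paper exploits.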

  18. Pre-Capture Privacy for Small Vision Sensors.

    PubMed

    Pittaluga, Francesco; Koppal, Sanjeev Jagannatha

    2017-11-01

    The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.), our theory applies equally to smaller devices.

  19. Design and evaluation of the ReKon : an integrated detection and assessment perimeter system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dabling, Jeffrey Glenn; Andersen, Jason Jann; McLaughlin, James O.

    2013-02-01

    Kontek Industries (Kannapolis, NC) and their subsidiary, Stonewater Control Systems (Kannapolis, NC), have entered into a cooperative research and development agreement with Sandia to jointly develop and evaluate an integrated perimeter security system solution, one that couples access delay with detection and assessment. This novel perimeter solution was designed to be configurable for use at facilities ranging from high-security military sites to commercial power plants, to petro/chemical facilities of various kinds. A prototype section of the perimeter has been produced and installed at the Sandia Test and Evaluation Center in Albuquerque, NM. This prototype system integrated fiber optic break sensors, active infrared sensors, fence disturbance sensors, video motion detection, and ground sensors. This report documents the design, testing, and performance evaluation of the developed ReKon system. The ability of the system to properly detect pedestrian or vehicle attempts to bypass, breach, or otherwise defeat the system is characterized, as well as the Nuisance Alarm Rate.

  20. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations matched the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics are represented in animation and video stimuli, and this feature is a critical component in the future success of the video playback technique.
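The comparison such a tool performs can be sketched as summary statistics over a dense optic-flow field; the toy flow fields below stand in for fields estimated from a live-animal video and its animation counterpart.

```python
import numpy as np

def motion_stats(flow):
    """Summarize a dense optic-flow field (H x W x 2, pixels/frame) by the
    two quantities compared in the study: mean speed (magnitude of motion)
    and mean velocity (signed displacement vector)."""
    speed = np.linalg.norm(flow, axis=-1)
    return speed.mean(), flow.reshape(-1, 2).mean(axis=0)

# Toy flow fields standing in for a video and its animation counterpart.
video_flow = np.full((4, 4, 2), [1.0, 0.0])   # uniform rightward motion
anim_flow = np.full((4, 4, 2), [1.1, 0.0])    # slightly faster animation

v_speed, v_vel = motion_stats(video_flow)
a_speed, a_vel = motion_stats(anim_flow)
print(round(abs(a_speed - v_speed), 2))       # speed mismatch to report -> 0.1
```

A mismatch near zero across displays is the evidence the authors report for their animations.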

  1. As time passes by: Observed motion-speed and psychological time during video playback.

    PubMed

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production.

  2. As time passes by: Observed motion-speed and psychological time during video playback

    PubMed Central

    Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production. PMID:28614353

  3. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  4. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks yield a motion-based likelihood, which is combined with a color-based likelihood for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  5. Evaluation of Hands-On Clinical Exam Performance Using Marker-less Video Tracking.

    PubMed

    Azari, David; Pugh, Carla; Laufer, Shlomi; Cohen, Elaine; Kwan, Calvin; Chen, Chia-Hsiung Eric; Yen, Thomas Y; Hu, Yu Hen; Radwin, Robert

    2014-09-01

    This study investigates the potential of marker-less video tracking of the hands for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator presenting different clinical pathologies. Videos were made of each clinician's hands during the exam, and video processing software was used to track hand motion and quantify its kinematics. Practitioner motion patterns indicated consistent behavior of participants across multiple pathologies. Different pathologies exhibited characteristic motion patterns in the aggregate at specific parts of an exam, indicating consistent inter-participant behavior. Marker-less video kinematic tracking therefore shows promise in discriminating between different examination procedures, clinicians, and pathologies.

  6. A smart home application to eldercare: current status and lessons learned.

    PubMed

    Skubic, Marjorie; Alexander, Gregory; Popescu, Mihail; Rantz, Marilyn; Keller, James

    2009-01-01

    To address an aging population, we have been investigating sensor networks for monitoring older adults in their homes. In this paper, we report ongoing work in which passive sensor networks have been installed in 17 apartments in an aging in place eldercare facility. The network under development includes simple motion sensors, video sensors, and a bed sensor that captures sleep restlessness and pulse and respiration levels. Data collection has been ongoing for over two years in some apartments. This longevity in sensor data collection is allowing us to study the data and develop algorithms for identifying alert conditions such as falls, as well as extracting typical daily activity patterns for an individual. The goal is to capture patterns representing physical and cognitive health conditions and then recognize when activity patterns begin to deviate from the norm. In doing so, we strive to provide early detection of potential problems which may lead to serious health events if left unattended. We describe the components of the network and show examples of logged sensor data with correlated references to health events. A summary is also included on the challenges encountered and the lessons learned as a result of our experiences in monitoring aging adults in their homes.
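The pattern-deviation idea can be sketched as a simple baseline comparison on daily motion-sensor event counts; the z-score threshold and counts below are illustrative, not the project's actual alert logic.

```python
import statistics

def deviates_from_norm(history, today, z_thresh=2.5):
    """Flag a day whose motion-sensor event count deviates strongly from
    the resident's own baseline (a minimal stand-in for the pattern-
    deviation detection described above)."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history) or 1.0
    return abs(today - mu) / sigma > z_thresh

baseline = [210, 195, 220, 205, 215, 200, 225]   # daily event counts
print(deviates_from_norm(baseline, 212))   # typical day -> False
print(deviates_from_norm(baseline, 40))    # near-total inactivity -> True
```

In practice each resident gets their own baseline, so the same count can be normal for one person and an alert condition for another.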

  7. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays

    NASA Astrophysics Data System (ADS)

    Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald

    2014-03-01

    High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.

  8. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data to be calculated for the video holograms is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point were reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
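The shift-invariance property the method exploits can be sketched directly: the fringe pattern of an object point that moved by a motion vector (dx, dy) is the stored pattern translated by the same amount, so it need not be recomputed. Integer-pixel motion is assumed in this toy version.

```python
import numpy as np

def shift_fringe(fringe, dx, dy):
    """Shift-invariance of the N-LUT: translate a precomputed fringe
    pattern by the motion vector instead of recomputing it."""
    return np.roll(np.roll(fringe, dy, axis=0), dx, axis=1)

# Precomputed fringe for an object point at its frame-t position
# (a toy impulse marks the point; a real fringe is a full zone pattern).
base = np.zeros((8, 8))
base[2, 2] = 1.0

# Motion vector estimated between frames t and t+1.
dx, dy = 3, 1
compensated = shift_fringe(base, dx, dy)
print(compensated[3, 5] == 1.0)   # pattern moved to (2+dy, 2+dx) -> True
```

Only the points whose motion cannot be compensated this way need a fresh hologram computation, which is where the reported speedup comes from.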

  9. Joint Video Stitching and Stabilization from Moving Cameras.

    PubMed

    Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef

    2016-09-08

    In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a space-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are (1) a grid-based tracking method designed for improved robustness, which produces features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model adopted to handle scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" was developed for Adobe After Effects CC 2015 to show the processed videos.
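The path-optimization step can be sketched in one dimension: the estimated camera path is low-passed and the frames are then warped toward the smoothed path. A moving average stands in here for the paper's space-temporal optimization.

```python
import numpy as np

def smooth_path(path, radius=5):
    """Low-pass a per-frame camera path (here 1D translation) with a
    moving average, edge-padded so the output length matches the input."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, radius, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(1)
intended = np.linspace(0, 100, 200)           # smooth camera pan
shaky = intended + rng.normal(0, 3, 200)      # hand-held jitter
stable = smooth_path(shaky)

jitter = lambda p: np.abs(np.diff(p, 2)).mean()   # 2nd-difference roughness
print(jitter(stable) < jitter(shaky))             # -> True
```

Performing this smoothing jointly for all cameras, rather than per video, is what lets the stitched panorama stay consistent across views.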

  10. 26 CFR 1.181-3 - Qualified film or television production.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...

  11. 26 CFR 1.181-3 - Qualified film or television production.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...

  12. 26 CFR 1.181-3 - Qualified film or television production.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... any motion picture film or video tape (including digital video) production the production costs of... person acquires a completed motion picture film or video tape (including digital video) that the seller... include property for which records are required to be maintained under 18 U.S.C. 2257. (c) Compensation...

  13. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study

    PubMed Central

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-01-01

    The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe, and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications. PMID:28165382
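Counting breaths from the magnified motion signal can be sketched as zero-crossing detection on the mean-removed chest signal; the synthetic sinusoid below stands in for the signal extracted from Kinect frame differences.

```python
import numpy as np

def count_breaths(signal):
    """Count respiratory cycles as positive-going zero crossings of the
    mean-removed chest-motion signal."""
    s = signal - signal.mean()
    return int(np.sum((s[:-1] < 0) & (s[1:] >= 0)))

# Synthetic 60 s recording at 30 fps with 15 breaths per minute; the small
# phase offset keeps the first crossing inside the recording.
fps, seconds, bpm = 30, 60, 15
t = np.arange(fps * seconds) / fps
chest = np.sin(2 * np.pi * (bpm / 60) * t - 0.1)

print(count_breaths(chest))   # -> 15
```

Apnoea detection then reduces to flagging windows in which no crossing occurs for longer than a clinically chosen interval.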

  14. A Web-Based Video Digitizing System for the Study of Projectile Motion.

    ERIC Educational Resources Information Center

    Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.

    2000-01-01

    Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)

  15. The Relationship Between Pitching Mechanics and Injury: A Review of Current Concepts

    PubMed Central

    Chalmers, Peter N.; Wimmer, Markus A.; Verma, Nikhil N.; Cole, Brian J.; Romeo, Anthony A.; Cvetanovich, Gregory L.; Pearl, Michael L.

    2017-01-01

    Context: The overhand pitch is one of the fastest known human motions and places enormous forces and torques on the upper extremity. Shoulder and elbow pain and injury are common in high-level pitchers. A large body of research has been conducted to understand the pitching motion. Evidence Acquisition: A comprehensive review of the literature was performed to gain a full understanding of all currently available biomechanical and clinical evidence surrounding pitching motion analysis. These motion analysis studies use video motion analysis, electromyography, electromagnetic sensors, and markered motion analysis. This review includes studies performed between 1983 and 2016. Study Design: Clinical review. Level of Evidence: Level 5. Results: The pitching motion is a kinetic chain, in which the force generated by the large muscles of the lower extremity and trunk during the wind-up and stride phases are transferred to the ball through the shoulder and elbow during the cocking and acceleration phases. Numerous kinematic factors have been identified that increase shoulder and elbow torques, which are linked to increased risk for injury. Conclusion: Altered knee flexion at ball release, early trunk rotation, loss of shoulder rotational range of motion, increased elbow flexion at ball release, high pitch velocity, and increased pitcher fatigue may increase shoulder and elbow torques and risk for injury. PMID:28107113

  16. Identifying balance impairments in people with Parkinson's disease using video and wearable sensors.

    PubMed

    Stack, Emma; Agarwal, Veena; King, Rachel; Burnett, Malcolm; Tahavori, Fatemeh; Janko, Balazs; Harwin, William; Ashburn, Ann; Kunkel, Dorit

    2018-05-01

    Falls and near falls are common among people with Parkinson's (PwP). To date, most wearable sensor research has focussed on fall detection; few studies have explored whether wearable sensors can detect instability. Can instability (caution or near-falls) be detected using wearable sensors in comparison to video analysis? Twenty-four people (aged 60-86) with and without Parkinson's were recruited from community groups. Movements (e.g. walking, turning, transfers and reaching) were observed in the gait laboratory and/or at home and recorded using clinical measures, video and five wearable sensors (attached on the waist, ankles and wrists). After defining 'caution' and 'instability', two researchers evaluated the video data and a third the raw wearable sensor data, blinded to each other's evaluations. Agreement between video and sensor data was calculated on stability, timing, step count and strategy. Data was available for 117 performances: 82 (70%) appeared stable on video. Ratings agreed in 86/117 cases (74%). Highest agreement was noted for chair transfer, timed up and go test and 3 m walks. Video analysts noted caution (slow, contained movements, safety-enhancing postures and concentration) and/or instability (saving reactions, stopping after stumbling or veering) in 40/134 performances (30%): raw wearable sensor data identified 16/35 performances rated cautious or unstable (sensitivity 46%) and 70/82 rated stable (specificity 85%). There was a 54% chance that a performance identified from wearable sensors as cautious/unstable was so, rising to 80% for stable movements. Agreement between wearable sensor and video data suggested that wearable sensors can detect subtle instability and near-falls. Caution and instability were observed in nearly a third of performances, suggesting that simple, mildly challenging actions, with clearly defined start- and end-points, may be most amenable to monitoring during free-living at home. Using the genuine near-falls recorded, work continues to automatically detect subtle instability using algorithms. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
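The sensitivity and specificity figures quoted above follow from a standard confusion-matrix calculation:

```python
def detection_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and positive predictive value for an
    instability detector scored against video ground truth."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Counts from the study: 16/35 unstable performances flagged, and 70/82
# stable performances correctly passed by the sensor data.
sens, spec, ppv = detection_metrics(tp=16, fn=35 - 16, tn=70, fp=82 - 70)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")   # sensitivity 46%, specificity 85%
```

High specificity with modest sensitivity means the sensors rarely flag stable movement, but miss roughly half of the genuinely cautious or unstable performances.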

  17. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyper-spectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, which can offer the best display of the content depending on the task performed, e.g. spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
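Extracting a single band from a pixel-level filter mosaic, the starting point that multi-band super-resolution demosaicking improves on, can be sketched as strided sampling plus naive upsampling. The 3x3 mosaic layout below is an assumption made for a 9-band camera, not the sensor's documented pattern.

```python
import numpy as np

def extract_band(mosaic, band_row, band_col, n=3):
    """Pull one spectral band out of an n x n pixel-level filter mosaic
    and upsample it back to full resolution by nearest-neighbor, the
    naive baseline that super-resolution demosaicking improves on."""
    sparse = mosaic[band_row::n, band_col::n]
    return np.kron(sparse, np.ones((n, n)))   # nearest-neighbor upsample

# Toy 6x6 sensor frame: each pixel sees exactly one of 9 bands.
frame = np.arange(36, dtype=float).reshape(6, 6)
band0 = extract_band(frame, 0, 0)
print(band0.shape)   # -> (6, 6): full frame size, but only 1/9 true samples
```

The 1/9 sampling density per band is exactly the spatial-resolution cost that the paper trades against spectral content.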

  18. Visualizing Motion Patterns in Acupuncture Manipulation.

    PubMed

    Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung

    2016-07-16

    Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system.

  19. Two terminal micropower radar sensor

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground.

  20. Two terminal micropower radar sensor

    DOEpatents

    McEwan, T.E.

    1995-11-07

    A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground. 3 figs.

  1. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group (VQEG) Phase I test data set.
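The weighting scheme can be sketched as pooling a local quality map with weights that grow with information content and shrink with perceptual uncertainty; the exact weight definition in the paper differs from this minimal form.

```python
import numpy as np

def weighted_quality(local_quality, information, uncertainty, eps=1e-6):
    """Pool a local quality map with weights proportional to motion
    information content divided by perceptual uncertainty."""
    w = information / (uncertainty + eps)
    return float((w * local_quality).sum() / w.sum())

q = np.array([0.9, 0.5, 0.8])       # local quality of three regions
info = np.array([2.0, 0.1, 1.0])    # motion information content
unc = np.array([0.5, 0.5, 2.0])     # perceptual uncertainty

# High-information, low-uncertainty regions dominate the pooled score.
print(round(weighted_quality(q, info, unc), 3))   # -> 0.872
```

Compared with uniform pooling (mean 0.733 here), the weighted score is pulled toward the high-quality region the model predicts viewers attend to.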

  2. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
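    As a rough illustration of the motion-smoothing stage of stabilization (the paper's contribution is the full-frame completion and deblurring, which this sketch does not attempt), one can low-pass filter an estimated camera path and derive per-frame compensating shifts; all values here are made up:

```python
# Minimal sketch of one stage of video stabilization (not the authors' full
# method): smooth an estimated 1-D camera translation path with a moving
# average, then derive the per-frame compensating shift.

def smooth_path(path, radius=2):
    """Moving-average smoothing of a 1-D camera translation path."""
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        out.append(sum(path[lo:hi]) / (hi - lo))
    return out

path = [0.0, 3.0, -2.0, 4.0, -1.0, 2.0]      # jittery per-frame x-shift
smoothed = smooth_path(path)
compensation = [s - p for p, s in zip(path, smoothed)]  # shift to apply
```

    Applying the compensating shifts is what exposes the missing border regions that the paper's motion inpainting then fills in.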

  3. System and method for optical fiber based image acquisition suitable for use in turbine engines

    DOEpatents

    Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin

    2017-05-16

    A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis and a set of estimated image signals are generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion-video of a region of interest within a turbine engine.

  4. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
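    The full search block matching algorithm mentioned above is easy to state in code. This toy version (frame size, block size, and search range are illustrative, and it works at integer-pixel accuracy only) returns the offset minimizing the sum of absolute differences (SAD):

```python
# Toy full-search block matching: for one block of the current frame, scan
# every candidate offset in a search window of the reference frame and keep
# the offset with the lowest SAD. Frames are plain 2-D lists.

def sad(ref, cur, rx, ry, cx, cy, bs):
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(bs) for i in range(bs))

def full_search(ref, cur, cx, cy, bs, rng):
    """Return the (dx, dy) motion vector minimizing SAD."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx and 0 <= ry and rx + bs <= len(ref[0]) and ry + bs <= len(ref):
                cost = sad(ref, cur, rx, ry, cx, cy, bs)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Reference frame holds a bright 2x2 patch; in the current frame it has
# moved one pixel right and one pixel down.
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 255
cur = [[0] * 8 for _ in range(8)]
cur[3][3] = cur[3][4] = cur[4][3] = cur[4][4] = 255

mv = full_search(ref, cur, cx=3, cy=3, bs=2, rng=2)
```

    Sub-pixel refinement, the case the paper targets, would interpolate `ref` between integer positions, which is exactly the step proposed for offloading to graphics hardware.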

  5. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
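    For reference, this is how the eight-parameter homography motion model discussed above maps a pixel; the matrix below encodes a pure translation and is purely illustrative:

```python
# Illustrative only: an eight-parameter homography is a 3x3 matrix h with
# h[2][2] fixed to 1, leaving eight free parameters.

def apply_homography(h, x, y):
    """Project (x, y) through homography h (row-major 3x3, h[2][2] == 1)."""
    xw = h[0][0] * x + h[0][1] * y + h[0][2]
    yw = h[1][0] * x + h[1][1] * y + h[1][2]
    w  = h[2][0] * x + h[2][1] * y + h[2][2]
    return xw / w, yw / w

# Pure translation by (5, -3) as a degenerate homography.
h = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
pt = apply_homography(h, 10, 10)
```

    The paper's point is that a single such matrix per frame fits poorly when the scene has depth; its global camera parameters plus per-block plane parameters induce a different homography for each planar region.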

  6. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its remarkable ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
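    The "grow when required" behavior can be caricatured in a few lines. The sketch below is a heavily simplified, hypothetical reduction (real GWR also maintains habituation counters, edges between nodes, and a second-best-node criterion); the threshold and learning rate are invented:

```python
import math

# Simplified "grow when required" update: keep a set of prototype vectors;
# if a new sample is farther than an activation threshold from its
# best-matching prototype, insert a new node, otherwise nudge the winner
# toward the sample. Threshold and rate are made up.

def gwr_update(nodes, x, thresh=1.0, lr=0.2):
    best = min(nodes, key=lambda n: math.dist(n, x))
    if math.dist(best, x) > thresh:
        nodes.append(list(x))          # grow: new prototype
    else:
        for i in range(len(best)):     # adapt: move winner toward sample
            best[i] += lr * (x[i] - best[i])
    return nodes

nodes = [[0.0, 0.0]]
for sample in [(0.1, 0.1), (5.0, 5.0), (5.2, 4.9)]:
    gwr_update(nodes, sample)
```

    The nearby sample adapts the existing node, the distant one spawns a new node, and the third adapts that new node, so the network's size tracks the structure of the input space.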

  7. Optical sensor feedback assistive technology to enable patients to play an active role in the management of their body dynamics during radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Parkhurst, J. M.; Price, G. J.; Sharrock, P. J.; Stratford, J.; Moore, C. J.

    2013-04-01

    Patient motion during treatment is well understood as a prime factor limiting radiotherapy success, with the risks most pronounced in modern safety critical therapies promising the greatest benefit. In this paper we describe a real-time visual feedback device designed to help patients to actively manage their body position, pose and motion. In addition to technical device details, we present preliminary trial results showing that its use enables volunteers to successfully manage their respiratory motion. The device enables patients to view their live body surface measurements relative to a prior reference, operating on the concept that co-operative engagement with patients will both improve geometric conformance and remove their perception of isolation, in turn easing stress related motion. The device is driven by a real-time wide field optical sensor system developed at The Christie. Feedback is delivered through three intuitive visualization modes of hierarchically increasing display complexity. The device can be used with any suitable display technology; in the presented study we use both personal video glasses and a standard LCD projector. The performance characteristics of the system were measured, with the frame rate, throughput and latency of the feedback device being 22.4 fps, 47.0 Mbps, 109.8 ms, and 13.7 fps, 86.4 Mbps, 119.1 ms for single and three-channel modes respectively. The pilot study, using ten healthy volunteers over three sessions, shows that the use of visual feedback resulted in both a reduction in the participants' respiratory amplitude, and a decrease in their overall body motion variability.

  8. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  9. Magnetization distribution of hydrothermal deposits from three component magnetometer survey using ROV in the Lau Basin, the southwestern Pacific

    NASA Astrophysics Data System (ADS)

    Kim, C.; Choi, S.; Park, C.

    2013-12-01

    Deep-sea three-component magnetic surveys using an ROV (Remotely Operated Vehicle) were conducted in April 2011 and January 2012 at the TA25 and TA26 seamounts in the Lau Basin, southwestern Pacific. In 2011 the survey covered only the western slope of the TA25 caldera, using the IBRV (Ice Breaker Research Vessel) ARAON of KIOST (Korea Institute of Ocean Science & Technology) and an ROV operated by Oceaneering Co. In January 2012 the survey covered the western (site A) and eastern (site B) slopes of the TA25 caldera and the summit area of TA26, using the German R/V SONNE and an ROV operated by ROPOS Co. The 2011 and 2012 surveys comprised 13 and 29 N-S lines, respectively (TA25-East: 12 lines, TA25-West: 11 lines, TA26: 6 lines), with about 100 m line spacing. In both years we also performed figure-eight rotation maneuvers of the ROV for magnetic calibration. The magnetometer sensor was attached to the frame of the ROV, with the data logger and motion sensor mounted inside the ROV. The three-component magnetometer measures the X (north), Y (east) and Z (vertical) components of the magnetic field, and a motion sensor (Octans) provided pitch, roll and yaw data for correcting the magnetic data for the motion of the ROV. During the survey the ROV followed the planned tracks at 50 m above the seafloor; the magnetometer and motion sensor data, together with USBL (Ultra Short Base Line) positioning data for the ROV, were recorded on a notebook computer through the ROV's optical cable. Hydrothermal fluids above the Curie temperature can quickly alter or replace iron-rich magnetic minerals, reducing the magnetic remanence of the crustal rocks, in some cases to near 0 A/m magnetization. Low-magnetization zones occur in the southwestern and northern parts of TA25 site A and in the south-southwestern, northwestern and central parts of TA25 site B; TA26 has a low-magnetization zone in its central part. These low-magnetization zones usually appear in groups, and some are well matched with chimney sites or vent areas identified from video observations or rock sampling, so the three-component magnetic data are useful for locating possible hydrothermal vents in the survey areas.
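    The pitch/roll/yaw correction of the three-component magnetometer data can be sketched as a sequence of axis rotations. The rotation order (roll, then pitch, then yaw) and the sample field values below are assumptions for illustration, not the survey's actual processing chain:

```python
import math

# Sketch of attitude correction: rotate a three-component magnetometer
# reading from the ROV body frame into the geographic frame using the
# motion sensor's roll, pitch and yaw angles (radians).

def body_to_world(v, roll, pitch, yaw):
    x, y, z = v
    # roll about the x-axis
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    # pitch about the y-axis
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # yaw about the z-axis
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    return (x, y, z)

# A level, north-facing ROV leaves the reading unchanged; a 90-degree yaw
# swings the horizontal component from X (north) into Y (east).
level = body_to_world((30000.0, 0.0, 40000.0), 0.0, 0.0, 0.0)
yawed = body_to_world((30000.0, 0.0, 40000.0), 0.0, 0.0, math.pi / 2)
```

    The figure-eight calibration maneuvers mentioned above serve a complementary purpose: estimating the vehicle's own magnetic signature so it can be subtracted before this frame rotation.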

  10. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  11. The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso

    2010-01-01

    Wireless sensor networks typically consist of a great number of tiny low-cost electronic devices with limited sensing and computing capabilities which cooperatively communicate to collect some kind of information from an area of interest. When wireless nodes of such networks are equipped with a low-power camera, visual data can be retrieved, facilitating a new set of novel applications. The nature of video-based wireless sensor networks demands new algorithms and solutions, since traditional wireless sensor network approaches are not feasible or even efficient for that specialized communication scenario. The coverage problem is a crucial issue of wireless sensor networks, requiring specific solutions when video-based sensors are employed. In this paper, we survey the state of the art on this particular issue, covering strategies, algorithms and general computational solutions. Open research areas are also discussed, pointing to promising directions for investigation of coverage in video-based wireless sensor networks. PMID:22163651

  12. Video sensor architecture for surveillance applications.

    PubMed

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
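    One of the pipeline stages listed above, labeling, can be illustrated with a toy 4-connected component pass over a binary foreground mask. The mask and grid size are invented, and the node's actual DSP/FPGA implementation is of course far more elaborate:

```python
# Toy connected-component labeling (4-connectivity) of a binary foreground
# mask, the kind of output a segmentation stage might hand to a labeling
# stage. Returns a label grid and the number of components found.

def label(mask):
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                stack = [(x, y)]        # iterative flood fill
                while stack:
                    cx, cy = stack.pop()
                    if 0 <= cx < w and 0 <= cy < h and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = count
                        stack += [(cx + 1, cy), (cx - 1, cy),
                                  (cx, cy + 1), (cx, cy - 1)]
    return labels, count

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
labels, count = label(mask)   # two separate foreground objects
```

    Each labeled component would then be passed on to the tracking and classification stages of the node.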

  13. Video Sensor Architecture for Surveillance Applications

    PubMed Central

    Sánchez, Jordi; Benet, Ginés; Simó, José E.

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%. PMID:22438723

  14. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    PubMed Central

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753
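    A minimal sketch of the frame-tagging idea, with an invented XML schema (the record above does not specify the paper's actual tag format), might serialize per-frame sensor readings like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical illustration of tagging video frames with sensor data:
# attach a small metadata record (here, temperature and GPS) to each frame
# index and serialize the result as XML. The schema is invented.

def tag_frames(readings):
    root = ET.Element("video")
    for idx, (temp, lat, lon) in enumerate(readings):
        frame = ET.SubElement(root, "frame", index=str(idx))
        ET.SubElement(frame, "temperature").text = str(temp)
        ET.SubElement(frame, "gps").text = f"{lat},{lon}"
    return ET.tostring(root, encoding="unicode")

xml_doc = tag_frames([(27.5, 28.1, -15.4), (27.6, 28.1, -15.4)])
```

    A server-side semantic layer could then index such per-frame tags to answer queries like "beach videos recorded above 25 degrees".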

  15. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    PubMed

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed.

  16. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
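    The core measurement, how much a person moves between frames, can be approximated by simple frame differencing. This sketch is not Motion Tracker's actual algorithm; the threshold and the tiny frames are illustrative:

```python
# Minimal motion-silhouette-style estimate: the "amount of movement" per
# frame is the fraction of pixels whose intensity changed by more than a
# threshold since the previous frame. Frames are 2-D lists of intensities.

def motion_amount(prev, cur, thresh=10):
    changed = sum(1 for prev_row, cur_row in zip(prev, cur)
                  for p, c in zip(prev_row, cur_row) if abs(c - p) > thresh)
    return changed / (len(cur) * len(cur[0]))

f0 = [[0] * 4 for _ in range(4)]
f1 = [row[:] for row in f0]
f1[1][1] = f1[1][2] = 200          # a small patch moved/appeared
amount = motion_amount(f0, f1)     # 2 of 16 pixels changed
```

    A per-frame time series of such values is the kind of signal the paper correlates against chair pressure sensors, a wrist accelerometer, and Kinect head tracking.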

  17. Trajectory of coronary motion and its significance in robotic motion cancellation.

    PubMed

    Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor

    2004-05-01

    To characterize the remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' stabilizer, using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high-speed black-and-white video camera (50 frames/s) coupled with a laser sensor (60 microm resolution) was used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured in stretches of 8 s each, and several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 microm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave, while slow excursion phases (<50 microm/s in the lateral plane) were observed during the P wave and the ST segment. The trajectories of the points of interest during consecutive cardiac cycles, as well as during cardiac cycles minutes apart, remained comparable (the differences were negligible), provided the hemodynamics remained stable. Inotrope-induced changes in cardiac contractility influenced not only the maximum excursion but also the shape of the trajectory. Normal positive-pressure ventilation displacing the heart in the thoracic cage was evident from the displacement of the reference point of the trajectory. The movement of the coronary artery after stabilization thus still appears to be significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation with existing robotic systems, and velocity plots could also help improve gated cardiac imaging.

  18. Data fusion for improved camera-based detection of respiration in neonates

    NASA Astrophysics Data System (ADS)

    Jorge, João; Villarroel, Mauricio; Chaichulee, Sitthichok; McCormick, Kenny; Tarassenko, Lionel

    2018-02-01

    Monitoring respiration during neonatal sleep is notoriously difficult due to the nonstationary nature of the signals and the presence of spurious noise. Current approaches rely on the use of adhesive sensors, which can damage the fragile skin of premature infants. Recently, non-contact methods using low-cost RGB cameras have been proposed to acquire this vital sign from (a) motion or (b) photoplethysmographic signals extracted from the video recordings. Recent developments in deep learning have yielded robust methods for subject detection in video data. In the analysis described here, we present a novel technique for combining respiratory information from high-level visual descriptors provided by a multi-task convolutional neural network. Using blind source separation, we find the combination of signals which best suppresses pulse and motion distortions and subsequently use this to extract a respiratory signal. Evaluation results were obtained from recordings of 5 neonatal patients nursed in the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital, Oxford, UK. We compared respiratory rates derived from this fused breathing signal against those measured using the gold standard provided by the attending clinical staff. We show that respiratory rate (RR) can be accurately estimated over the entire range of respiratory frequencies.
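    The benefit of combining channels can be seen in a deliberately contrived example: if two camera-derived channels carry the respiratory component with the pulse distortion in opposite phase, an equal-weight average cancels the pulse. The real method uses blind source separation over many learned descriptors; the mixing below is invented:

```python
import math

# Contrived illustration of channel fusion: two channels mix a slow
# respiratory component with a faster pulse distortion in opposite phase,
# so averaging them suppresses the pulse and leaves the breathing signal.

n = 100
resp = [math.sin(2 * math.pi * 0.3 * t / 10) for t in range(n)]        # ~0.3 Hz
pulse = [0.5 * math.sin(2 * math.pi * 1.5 * t / 10) for t in range(n)]  # ~1.5 Hz

chan_a = [r + p for r, p in zip(resp, pulse)]
chan_b = [r - p for r, p in zip(resp, pulse)]
fused = [(a + b) / 2 for a, b in zip(chan_a, chan_b)]   # pulse cancels

residual = max(abs(f - r) for f, r in zip(fused, resp))
```

    In practice the mixing weights are unknown, which is why the paper estimates the best combination with blind source separation rather than fixing it by hand.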

  19. Automated video-based assessment of surgical skills for training and evaluation in medical schools.

    PubMed

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan

    2016-09-01

    Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainees in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities. We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol- or word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
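    The frequency-analysis idea, separating smooth from jittery motion by where a signal's energy sits in the spectrum, can be sketched with a direct DFT; the signals and frequency bins below are illustrative, not the paper's features:

```python
import math

# Direct DFT power spectrum of a 1-D motion signal. Smooth, deliberate
# motion concentrates energy in low-frequency bins; jittery motion
# concentrates it in high-frequency bins.

def dft_power(signal):
    n = len(signal)
    power = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        power.append(re * re + im * im)
    return power

n = 32
smooth = [math.sin(2 * math.pi * 1 * t / n) for t in range(n)]    # slow motion
jitter = [math.sin(2 * math.pi * 12 * t / n) for t in range(n)]   # rapid motion

p_smooth, p_jitter = dft_power(smooth), dft_power(jitter)
```

    Band energies computed this way (or with an FFT for real data) could serve as features for a skill classifier, in the spirit of the frequency features the paper finds most effective.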

  20. Content-based video retrieval by example video clip

    NASA Astrophysics Data System (ADS)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information ('DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
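    Comparing clips by sequences of frame signatures can be sketched as follows; the three-coefficient signatures are invented stand-ins for the paper's DC-based signatures, and the L1 distance is an illustrative choice:

```python
# Sketch of clip matching by frame signatures: each frame is reduced to a
# short vector of DC-like coefficients, and clip distance is the average
# frame-to-frame L1 distance (smaller = more similar). Data are made up.

def frame_dist(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def clip_dist(sig_a, sig_b):
    n = min(len(sig_a), len(sig_b))
    return sum(frame_dist(sig_a[i], sig_b[i]) for i in range(n)) / n

query    = [[10, 20, 30], [12, 22, 28]]
match    = [[11, 19, 30], [12, 23, 27]]   # nearly the same clip
distinct = [[90, 5, 60], [80, 1, 70]]     # unrelated content

d_match = clip_dist(query, match)
d_other = clip_dist(query, distinct)
```

    Ranking archive clips by this distance to the query signature sequence gives the retrieval behavior described above; the paper's signatures additionally fold in motion information.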

  1. Biomechanical analysis using Kinovea for sports application

    NASA Astrophysics Data System (ADS)

    Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin

    2018-04-01

    This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer subject was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded, in the sagittal plane only, using an established infrared motion capture system (Hawk–Cortex) and an HD VideoCam. The capture was repeated 5 times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software package) and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results (drop jump pattern) obtained using HD VideoCam–Kinovea are close to those obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, supporting the repeatability of the protocol and the reliability of the results. It can be concluded that the HD VideoCam–Kinovea integration has the potential to become a reliable motion capture and analysis system; moreover, it is low cost, portable and easy to use. The current study and its findings thus contribute useful knowledge pertaining to motion capture and analysis, drop jump movement and HD VideoCam–Kinovea integration.

  2. Low-Cost MEMS Sensors and Vision System for Motion and Position Estimation of a Scooter

    PubMed Central

    Guarnieri, Alberto; Pirotti, Francesco; Vettore, Antonio

    2013-01-01

    The possibility to identify with significant accuracy the position of a vehicle in a mapping reference frame for driving directions and best-route analysis is a topic which is attracting a lot of interest from the research and development sector. To reach the objective of accurate vehicle positioning and integrate response events, it is necessary to estimate position, orientation and velocity of the system with high measurement rates. In this work we test a system which uses low-cost sensors, based on Micro Electro-Mechanical Systems (MEMS) technology, coupled with information derived from a video camera placed on a two-wheel motor vehicle (scooter). In comparison to a four-wheel vehicle, the dynamics of a two-wheel vehicle feature a higher level of complexity, given that more degrees of freedom must be taken into account. For example, a motorcycle can twist sideways, thus generating a roll angle. A slight pitch angle has to be considered as well, since wheel suspensions have a higher degree of motion compared to four-wheel motor vehicles. In this paper we present a method for the accurate reconstruction of the trajectory of a “Vespa” scooter, which can be used as an alternative to the “classical” approach based on GPS/INS sensor integration. Position and orientation of the scooter are obtained by integrating MEMS-based orientation sensor data with digital images through a cascade of a Kalman filter and a Bayesian particle filter. PMID:23348036
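    The first stage of the cascade, a Kalman filter, is shown below for a scalar state as a textbook sketch. The paper fuses MEMS orientation data with digital imagery and adds a particle filter; the noise variances and measurements here are made up:

```python
# Textbook one-dimensional Kalman filter: one predict+update cycle for a
# scalar state, illustrating the filtering stage of the sensor fusion.

def kalman_step(x, p, z, q=0.01, r=0.5):
    """x, p: state estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances (assumed)."""
    p = p + q                 # predict: uncertainty grows over time
    k = p / (p + r)           # Kalman gain
    x = x + k * (z - x)       # update: blend prediction and measurement
    p = (1 - k) * p           # updated uncertainty shrinks
    return x, p

x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:        # noisy position measurements
    x, p = kalman_step(x, p, z)
# the estimate converges toward the measurements and the variance shrinks
```

    A vector-state version of the same predict/update structure, followed by a particle filter for the non-Gaussian parts of the problem, matches the cascade described in the abstract.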

  3. Low-Cost MEMS sensors and vision system for motion and position estimation of a scooter.

    PubMed

    Guarnieri, Alberto; Pirotti, Francesco; Vettore, Antonio

    2013-01-24

    The possibility to identify with significant accuracy the position of a vehicle in a mapping reference frame for driving directions and best-route analysis is a topic which is attracting a lot of interest from the research and development sector. To reach the objective of accurate vehicle positioning and integrate response events, it is necessary to estimate position, orientation and velocity of the system with high measurement rates. In this work we test a system which uses low-cost sensors, based on Micro Electro-Mechanical Systems (MEMS) technology, coupled with information derived from a video camera placed on a two-wheel motor vehicle (scooter). In comparison to a four-wheel vehicle, the dynamics of a two-wheel vehicle feature a higher level of complexity, given that more degrees of freedom must be taken into account. For example, a motorcycle can twist sideways, generating a roll angle. A slight pitch angle has to be considered as well, since wheel suspensions have a higher degree of motion compared to four-wheel motor vehicles. In this paper we present a method for the accurate reconstruction of the trajectory of a "Vespa" scooter, which can be used as an alternative to the "classical" approach based on GPS/INS sensor integration. Position and orientation of the scooter are obtained by integrating MEMS-based orientation sensor data with digital images through a cascade of a Kalman filter and a Bayesian particle filter.
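    The first stage of the cascade described above can be illustrated with a minimal one-dimensional constant-velocity Kalman filter. This is a sketch only, with hypothetical noise values and synthetic data; the paper's actual system fuses MEMS orientation data with image measurements and adds a Bayesian particle filter stage:

    ```python
    import numpy as np

    def kalman_1d(zs, dt=0.1, q=1e-3, r=0.05):
        """Constant-velocity Kalman filter over noisy position measurements zs."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q = q * np.eye(2)                       # process noise covariance
        R = np.array([[r]])                     # measurement noise covariance
        x = np.zeros(2)                         # initial state estimate
        P = np.eye(2)                           # initial state covariance
        out = []
        for z in zs:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    # synthetic measurements of a scooter moving at a constant 1 m/s
    t = np.arange(0, 5, 0.1)
    rng = np.random.default_rng(0)
    zs = t + rng.normal(0, 0.05, t.size)
    est = kalman_1d(zs)
    ```

    In the full system, the particle filter stage would then refine this Gaussian estimate using the non-linear image measurements.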

  4. Smart sensing surveillance video system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Szu, Harold

    2016-05-01

    An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  5. Using optical flow for the detection of floating mines in IR image sequences

    NASA Astrophysics Data System (ADS)

    Borghgraef, Alexander; Acheroy, Marc

    2006-09-01

    In the first Gulf War, unmoored floating mines proved to be a real hazard for shipping traffic. An automated system capable of detecting these and other free-floating small objects, using readily available sensors such as infra-red cameras, would prove to be a valuable mine-warfare asset and could double as a collision-avoidance mechanism and a search-and-rescue aid. The noisy background provided by the sea surface and occlusion by waves make it difficult to detect small floating objects using only algorithms based upon the intensity, size or shape of the target. This leads us to look at the image sequence for temporal detection characteristics. The target's apparent motion is one such determinant, given the contrast between the bobbing motion of the floating object and the strong horizontal component present in the propagation of the wavefronts. We applied the Proesmans optical flow algorithm to IR video footage of practice mines in order to extract this motion characteristic; a threshold on the vertical motion component is then imposed to detect the floating targets.
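    The final thresholding step can be sketched in pure NumPy, assuming the flow field has already been computed (by the Proesmans algorithm or otherwise). The threshold value, blob size, and synthetic data below are all hypothetical:

    ```python
    import numpy as np

    def detect_bobbing(flow_v, wave_band=0.5, min_pixels=4):
        """Flag pixels whose vertical flow magnitude exceeds the wave background.

        flow_v : 2-D array of vertical optical-flow components for one frame.
        wave_band : threshold separating wave motion from bobbing targets.
        Returns the boolean detection mask and whether a target-sized blob exists.
        """
        mask = np.abs(flow_v) > wave_band
        return mask, bool(mask.sum() >= min_pixels)

    # synthetic flow field: weak background motion plus one bobbing target
    flow_v = np.random.default_rng(1).normal(0, 0.1, (32, 32))
    flow_v[10:13, 20:23] += 1.5      # strong vertical motion at the target
    mask, found = detect_bobbing(flow_v)
    ```

    A real detector would additionally filter the mask over time, since a mine bobs persistently while wave crests move through.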

  6. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware comprises two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper but also more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol-compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  7. Flexibility Versus Expertise: A Closer Look at the Employment of United States Air Force Imagery Analysts

    DTIC Science & Technology

    2017-10-01

    significant pressure upon Air Force imagery analysts to exhibit expertise in multiple disciplines including full-motion video, electro-optical still...disciplines varies, but the greatest divergence is between full-motion video and all other forms of still imagery. This paper delves into three...motion video discipline were to be created. The research reveals several positive aspects of this course of action but precautions would be required

  8. Bandwidth characteristics of multimedia data traffic on a local area network

    NASA Technical Reports Server (NTRS)

    Chuang, Shery L.; Doubek, Sharon; Haines, Richard F.

    1993-01-01

    Limited spacecraft communication links call for users to investigate the potential use of video compression and multimedia technologies to optimize bandwidth allocations. The objective was to determine the transmission characteristics of multimedia data - motion video, text or bitmap graphics, and files transmitted independently and simultaneously over an ethernet local area network. Commercial desktop video teleconferencing hardware and software and Intel's proprietary Digital Video Interactive (DVI) video compression algorithm were used, and typical task scenarios were selected. The transmission time, packet size, number of packets, and network utilization of the data were recorded. Each data type - compressed motion video, text and/or bitmapped graphics, and a compressed image file - was first transmitted independently and its characteristics recorded. The results showed that an average bandwidth of 7.4 kilobits per second (kbps) was used to transmit graphics; an average bandwidth of 86.8 kbps was used to transmit an 18.9-kilobyte (kB) image file; a bandwidth of 728.9 kbps was used to transmit compressed motion video at 15 frames per second (fps); and a bandwidth of 75.9 kbps was used to transmit compressed motion video at 1.5 fps. Average packet sizes were 933 bytes for graphics, 498.5 bytes for the image file, 345.8 bytes for motion video at 15 fps, and 341.9 bytes for motion video at 1.5 fps. Simultaneous transmission of multimedia data types was also characterized. The multimedia packets used transmission bandwidths of 341.4 kbps and 105.8 kbps. Bandwidth utilization varied according to the frame rate (frames per second) setting for the transmission of motion video. Packet size did not vary significantly between the data types. When these characteristics are applied to Space Station Freedom (SSF), the packet sizes fall within the maximum specified by the Consultative Committee for Space Data Systems (CCSDS).
The uplink of imagery to SSF may be performed at minimal frame rates and/or within seconds of delay, depending on the user's allocated bandwidth. Further research to identify the acceptable delay interval and its impact on human performance is required. Additional studies in network performance using various video compression algorithms and integrated multimedia techniques are needed to determine the optimal design approach for utilizing SSF's data communications system.
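    The relation between packet count, packet size, and bandwidth used throughout the abstract is simple arithmetic; a sketch follows. The packet count and duration below are hypothetical, and only the 933-byte average graphics packet size is taken from the text:

    ```python
    def bandwidth_kbps(num_packets, avg_packet_bytes, duration_s):
        """Average throughput in kilobits per second."""
        return num_packets * avg_packet_bytes * 8 / 1000 / duration_s

    # hypothetical example: 1,000 packets of 933 bytes sent over 1.0 s
    # (933 bytes matches the average graphics packet size reported above)
    print(round(bandwidth_kbps(1000, 933, 1.0), 1))  # → 7464.0
    ```

    Note that this counts payload bytes only; protocol framing overhead would raise the on-wire figure.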

  9. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.
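    PSNR, the quality metric quoted above, is computed from the mean squared error between a reference frame and its decoded counterpart; a minimal sketch with synthetic frames:

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized frames."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    ref = np.zeros((8, 8))
    noisy = ref + 16.0                  # uniform error of 16 grey levels
    print(round(psnr(ref, noisy), 2))   # → 24.05
    ```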

  10. Automated detection of videotaped neonatal seizures of epileptic origin.

    PubMed

    Karayiannis, Nicolaos B; Xiong, Yaohua; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-06-01

    This study aimed at the development of a seizure-detection system by training neural networks with quantitative motion information extracted from short video segments of neonatal seizures of the myoclonic and focal clonic types and random infant movements. The motion of the infants' body parts was quantified by temporal motion-strength signals extracted from video segments by motion-segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The motion of the infants' body parts also was quantified by temporal motion-trajectory signals extracted from video recordings by robust motion trackers based on block-motion models. These motion trackers were developed to adjust autonomously to illumination and contrast changes that may occur during the video-frame sequence. Video segments were represented by quantitative features obtained by analyzing motion-strength and motion-trajectory signals in both the time and frequency domains. Seizure recognition was performed by conventional feed-forward neural networks, quantum neural networks, and cosine radial basis function neural networks, which were trained to detect neonatal seizures of the myoclonic and focal clonic types and to distinguish them from random infant movements. The computational tools and procedures developed for automated seizure detection were evaluated on a set of 240 video segments of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). Regardless of the decision scheme used for interpreting the responses of the trained neural networks, all the neural network models exhibited sensitivity and specificity>90%. 
For one of the decision schemes proposed for interpreting the responses of the trained neural networks, the majority of the trained neural-network models exhibited sensitivity>90% and specificity>95%. In particular, cosine radial basis function neural networks achieved the performance targets of this phase of the project (i.e., sensitivity>95% and specificity>95%). The best among the motion segmentation and tracking methods developed in this study produced quantitative features that constitute a reliable basis for detecting neonatal seizures. The performance targets of this phase of the project were achieved by combining the quantitative features obtained by analyzing motion-strength signals with those produced by analyzing motion-trajectory signals. The computational procedures and tools developed in this study to perform off-line analysis of short video segments will be used in the next phase of this project, which involves the integration of these procedures and tools into a system that can process and analyze long video recordings of infants monitored for seizures in real time.
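    The time- and frequency-domain analysis of motion-strength signals described above can be sketched as follows. The synthetic 3 Hz rhythm and the particular features chosen are illustrative assumptions, not the paper's exact feature set:

    ```python
    import numpy as np

    def motion_features(signal, fs):
        """Time- and frequency-domain features of a motion-strength signal."""
        spec = np.abs(np.fft.rfft(signal - signal.mean()))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        return {"mean": signal.mean(),                  # time domain
                "std": signal.std(),
                "dominant_freq_hz": freqs[np.argmax(spec)]}  # frequency domain

    # synthetic clonic-like rhythmic motion at 3 Hz, sampled at 30 fps
    fs = 30.0
    t = np.arange(0, 4, 1 / fs)
    sig = 1.0 + 0.5 * np.sin(2 * np.pi * 3.0 * t)
    feats = motion_features(sig, fs)
    ```

    A rhythmic clonic seizure would concentrate spectral energy at a low dominant frequency, whereas random infant movements spread energy broadly; features like these are what the neural networks are trained on.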

  11. Wearable inertial sensors in swimming motion analysis: a systematic review.

    PubMed

    de Magalhaes, Fabricio Anicio; Vannozzi, Giuseppe; Gatta, Giorgio; Fantozzi, Silvia

    2015-01-01

    The use of contemporary technology is widely recognised as a key tool for enhancing competitive performance in swimming. Video analysis is traditionally used by coaches to acquire reliable biomechanical data about swimming performance; however, this approach requires a huge computational effort, thus introducing a delay in providing quantitative information. Inertial and magnetic sensors, including accelerometers, gyroscopes and magnetometers, have been recently introduced to assess the biomechanics of swimming performance. Research in this field has attracted a great deal of interest in the last decade due to the gradual improvement of the performance of sensors and the decreasing cost of miniaturised wearable devices. With the aim of describing the state of the art of current developments in this area, a systematic review of the existing methods was performed using the following databases: PubMed, ISI Web of Knowledge, IEEE Xplore, Google Scholar, Scopus and Science Direct. Twenty-seven articles published in indexed journals and conference proceedings, focusing on the biomechanical analysis of swimming by means of inertial sensors, were reviewed. The articles were categorised according to sensors' specifications, anatomical sites where the sensors were attached, experimental design and applications for the analysis of swimming performance. Results indicate that inertial sensors are reliable tools for swimming biomechanical analyses.

  12. The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

    PubMed Central

    Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.

    2015-01-01

    Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed, and 591 mm/s2 for acceleration, and less than 93 mm/s for speed and 656 mm/s2 for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Government Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion when compared against 3D motion capture for a simulated repetitive motion task. PMID:25978764
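    Cross-correlation template matching of the kind described can be sketched with a brute-force normalized cross-correlation over a synthetic frame pair. The patch location, frame size, and the calibration comment are hypothetical:

    ```python
    import numpy as np

    def match_template(frame, template):
        """Return (row, col) of the best normalized cross-correlation match."""
        th, tw = template.shape
        tz = (template - template.mean()) / template.std()
        best, pos = -np.inf, (0, 0)
        for r in range(frame.shape[0] - th + 1):
            for c in range(frame.shape[1] - tw + 1):
                patch = frame[r:r + th, c:c + tw]
                pz = (patch - patch.mean()) / (patch.std() + 1e-12)
                score = np.mean(tz * pz)        # normalized cross-correlation
                if score > best:
                    best, pos = score, (r, c)
        return pos

    # track a 5x5 patch that shifts 3 pixels right between frames
    rng = np.random.default_rng(2)
    f0 = rng.random((20, 20))
    f1 = np.roll(f0, 3, axis=1)
    template = f0[8:13, 8:13]
    r, c = match_template(f1, template)
    dx_pixels = c - 8                    # displacement in pixels
    # speed = dx_pixels * mm_per_pixel * fps  (calibration values are hypothetical)
    ```

    Tracking the region frame by frame and differentiating the pixel trajectory (after spatial calibration) yields the hand speed used to estimate HAL.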

  13. The role of optical flow in automated quality assessment of full-motion video

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art methods) on motion-based automated video quality assessment algorithms.

  14. Detecting Human Activity Using Acoustic, Seismic, Accelerometer, Video, and E-field Sensors

    DTIC Science & Technology

    2011-09-01

    Detecting Human Activity Using Acoustic, Seismic, Accelerometer, Video, and E-field Sensors, by Sarah H. Walker and Geoffrey H. Goldman, Adelphi, MD 20783-1197, ARL-TR-5729, September 2011.

  15. Next Generation Advanced Video Guidance Sensor: Low Risk Rendezvous and Docking Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Carrington, Connie; Spencer, Susan; Bryan, Thomas; Howard, Ricky T.; Johnson, Jimmie

    2008-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is being built and tested at MSFC. This paper provides an overview of current work on the NGAVGS, a summary of the video guidance heritage, and the AVGS performance on the Orbital Express mission. This paper also provides a discussion of applications to ISS cargo delivery vehicles, CEV, and future lunar applications.

  16. Free Space Optical Communication in the Military Environment

    DTIC Science & Technology

    2014-09-01

    Communications Commission FDA Food and Drug Administration FMV Full Motion Video FOB Forward Operating Base FOENEX Free-Space Optical Experimental Network...from radio and voice to chat message and email. Data-rich multimedia content, such as high-definition pictures, video chat, video files, and...introduction of full-motion video (FMV) via numerous different Intelligence Surveillance and Reconnaissance (ISR) systems, such as targeting pods on

  17. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1990-01-01

    In the study of the dynamics and kinematics of the human body, a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video-based motion analysis systems to emerge as a cost-effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video-based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. The system is described.

  18. Pixel decomposition for tracking in low resolution videos

    NASA Astrophysics Data System (ADS)

    Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.

    2008-04-01

    This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
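    Spectral unmixing of the kind described estimates, for each pixel, the fractional contribution of each known colour "endmember". A minimal least-squares sketch follows; the endmember colours are hypothetical, and a full implementation would enforce the non-negativity and sum-to-one constraints jointly rather than by clipping:

    ```python
    import numpy as np

    def unmix(pixel, endmembers):
        """Estimate fractional abundances of known colour endmembers in a pixel."""
        # solve endmembers.T @ fractions = pixel in the least-squares sense
        fractions, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
        fractions = np.clip(fractions, 0, None)   # crude non-negativity fix
        return fractions / fractions.sum()        # renormalise to sum to one

    # RGB endmembers (rows): background grey, red object, blue object
    E = np.array([[0.5, 0.5, 0.5],
                  [0.9, 0.1, 0.1],
                  [0.1, 0.1, 0.9]])
    # a pixel that is 70% background and 30% red object
    pixel = 0.7 * E[0] + 0.3 * E[1]
    frac = unmix(pixel, E)
    ```

    Tracking how these per-pixel fractions change between frames is what allows sub-pixel objects and motion to be detected without ever resolving the objects themselves.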

  19. CD-I and Full Motion Video.

    ERIC Educational Resources Information Center

    Chen, Ching-chih

    1991-01-01

    Describes compact disc interactive (CD-I) as a multimedia home entertainment system that combines audio, visual, text, graphic, and interactive capabilities. Full-screen video and full-motion video (FMV) are explained, hardware for FMV decoding is described, software is briefly discussed, and CD-I titles planned for future production are listed.…

  20. Video Analysis of Muscle Motion

    ERIC Educational Resources Information Center

    Foster, Boyd

    2004-01-01

    In this article, the author discusses how video cameras can help students in physical education and sport science classes successfully learn and present anatomy and kinesiology content at levels. Video analysis of physical activity is an excellent way to expand student knowledge of muscle location and function, planes and axes of motion, and…

  1. The Successful Development of an Automated Rendezvous and Capture (AR&C) System for the National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    Roe, Fred D.; Howard, Richard T.

    2003-01-01

    During the 1990's, the Marshall Space Flight Center (MSFC) conducted pioneering research in the development of an automated rendezvous and capture/docking (AR&C) system for U.S. space vehicles. Development and demonstration of a rendezvous sensor was identified early in the AR&C Program as the critical enabling technology that allows automated proximity operations and docking. A first generation rendezvous sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on STS-87 and STS-95, proving the concept of a video-based sensor. A ground demonstration of the entire system and software was successfully tested. Advances in both video and signal processing technologies and the lessons learned from the two successful flight experiments provided a baseline for the development, by the MSFC, of a new generation of video based rendezvous sensor. The Advanced Video Guidance Sensor (AVGS) has greatly increased performance and additional capability for longer-range operation with a new target designed as a direct replacement for existing ISS hemispherical reflectors.

  2. Senior residents' perceived need of and preferences for "smart home" sensor technologies.

    PubMed

    Demiris, George; Hensel, Brian K; Skubic, Marjorie; Rantz, Marilyn

    2008-01-01

    The goal of meeting the desire of older adults to remain independent in their home setting while controlling healthcare costs has led to the conceptualization of "smart homes." A smart home is a residence equipped with technology that enhances safety of residents and monitors their health conditions. The study aim is to assess older adults' perceptions of specific smart home technologies (i.e., a bed sensor, gait monitor, stove sensor, motion sensor, and video sensor). The study setting is TigerPlace, a retirement community designed according to the Aging in Place model. Focus group sessions with fourteen residents were conducted to assess perceived advantages and concerns associated with specific applications, and preferences for recipients of sensor-generated information pertaining to residents' activity levels, sleep patterns and potential emergencies. Sessions were audio-taped; tapes were transcribed, and a content analysis was performed. A total of fourteen older adults over the age of 65 participated in three focus group sessions. Most applications were perceived as useful, and participants would agree to their installation in their own home. Preference for specific sensors related to sensors' appearance and residents' own level of frailty and perceived need. Specific concerns about privacy were raised. The findings indicate an overall positive attitude toward sensor technologies for nonobtrusive monitoring. Researchers and practitioners are called upon to address ethical and technical challenges in this emerging domain.

  3. The influence of motion quality on responses towards video playback stimuli.

    PubMed

    Ware, Emma; Saunders, Daniel R; Troje, Nikolaus F

    2015-05-11

    Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that the IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour. © 2015. Published by The Company of Biologists Ltd.

  4. Testing fine motor coordination via telehealth: effects of video characteristics on reliability and validity.

    PubMed

    Hoenig, Helen M; Amis, Kristopher; Edmonds, Carol; Morgan, Michelle S; Landerman, Lawrence; Caves, Kevin

    2017-01-01

    Background There is limited research about the effects of video quality on the accuracy of assessments of physical function. Methods A repeated measures study design was used to assess reliability and validity of the finger-nose test (FNT) and the finger-tapping test (FTT) carried out with 50 veterans who had impairment in gross and/or fine motor coordination. Videos were scored by expert raters under eight differing conditions, including in-person, high definition video with slow motion review and standard speed videos with varying bit rates and frame rates. Results FTT inter-rater reliability was excellent with slow motion video (ICC 0.98-0.99) and good (ICC 0.59) under the normal speed conditions. Inter-rater reliability for FNT 'attempts' was excellent (ICC 0.97-0.99) for all viewing conditions; for FNT 'misses' it was good to excellent (ICC 0.89) with slow motion review but substantially worse (ICC 0.44) on the normal speed videos. FTT criterion validity (i.e. compared to slow motion review) was excellent (β = 0.94) for the in-person rater and good (β = 0.77) on normal speed videos. Criterion validity for FNT 'attempts' was excellent under all conditions (r ≥ 0.97) and for FNT 'misses' it was good to excellent under all conditions (β = 0.61-0.81). Conclusions In general, the inter-rater reliability and validity of the FNT and FTT assessed via video technology is similar to standard clinical practices, but is enhanced with slow motion review and/or higher bit rate.

  5. Creating Stop-Motion Videos with iPads to Support Students' Understanding of Cell Processes: "Because You Have to Know What You're Talking about to Be Able to Do It"

    ERIC Educational Resources Information Center

    Deaton, Cynthia C. M.; Deaton, Benjamin E.; Ivankovic, Diana; Norris, Frank A.

    2013-01-01

    The purpose of this qualitative case study is two-fold: (a) describe the implementation of a stop-motion animation video activity to support students' understanding of cell processes, and (b) present research findings about students' beliefs and use of iPads to support their creation of stop-motion videos in an introductory biology course. Data…

  6. Apparatus for Investigating Momentum and Energy Conservation With MBL and Video Analysis

    NASA Astrophysics Data System (ADS)

    George, Elizabeth; Vazquez-Abad, Jesus

    1998-04-01

    We describe the development and use of a laboratory setup that is appropriate for computer-aided student investigation of the principles of conservation of momentum and mechanical energy in collisions. The setup consists of two colliding carts on a low-friction track, with one of the carts (the target) attached to a spring, whose extension or compression takes the place of the pendulum's rise in the traditional ballistic pendulum apparatus. Position vs. time data for each cart are acquired either by using two motion sensors or by digitizing images obtained with a video camera. This setup allows students to examine the time history of momentum and mechanical energy during the entire collision process, rather than simply focusing on the before and after regions. We believe that this setup is suitable for helping students gain understanding as the processes involved are simple to follow visually, to manipulate, and to analyze.
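    Once position-vs-time data have been captured (from the motion sensors or the digitized video), momentum and kinetic energy histories follow by numerical differentiation; a minimal sketch with hypothetical cart data:

    ```python
    import numpy as np

    def momentum_energy(t, x, m):
        """Momentum and kinetic energy histories from position-vs-time samples."""
        v = np.gradient(x, t)            # numerical differentiation
        return m * v, 0.5 * m * v ** 2

    # hypothetical data: a 0.5 kg cart moving at a constant 0.4 m/s
    t = np.linspace(0, 2, 201)
    x = 0.4 * t
    p, ke = momentum_energy(t, x, 0.5)
    ```

    Computing p and ke for both carts at every sample is exactly what lets students see the exchange of momentum and energy during the collision itself, not just before and after.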

  7. Giving students the run of sprinting models

    NASA Astrophysics Data System (ADS)

    Heck, André; Ellermeijer, Ton

    2009-11-01

    A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.

  8. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in movie theaters results in huge revenue losses to the motion picture industry every year, but still-image and video capture may present an even bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion. Those are photosites in CCD sensors and semiconductor elements in CMOS sensors. They are known to reflect a small fraction of incident light, but they can also act as a diffraction grating, resulting in an optical response that could be utilized to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye safety of the people who could be exposed to optical beam radiation. This paper presents preliminary experimental data, as well as proof-of-concept simulation results.

  9. Motion based parsing for video from observational psychology

    NASA Astrophysics Data System (ADS)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  10. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
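    A minimal sketch of the patent's scheme: encrypt the current frame using the previously recorded frame as the key, then overwrite the stored frame with the ciphertext (which automatically destroys the key). The patent does not specify a cipher; plain XOR is used here purely for illustration, and a real design would use the previous frame to key a proper cipher.

```python
import numpy as np

def encrypt_frame(current, previous_key):
    # XOR stands in for the encryption circuit; it is its own inverse.
    return np.bitwise_xor(current, previous_key)

def decrypt_frame(encrypted, previous_key):
    return np.bitwise_xor(encrypted, previous_key)

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # stored key frame
frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # new sensor frame
enc = encrypt_frame(frame, prev)
# The write circuit would now overwrite the stored key frame with `enc`,
# deleting the key: without a copy of `prev`, `frame` cannot be recovered.
```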

  11. The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology, some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  12. Pyroelectric IR sensor arrays for fall detection in the older population

    NASA Astrophysics Data System (ADS)

    Sixsmith, A.; Johnson, N.; Whatmore, R.

    2005-09-01

    Uncooled pyroelectric sensor arrays have been studied over many years for use in thermal imaging applications. These arrays detect only changes in IR flux, so systems based upon them are very good at detecting movements of people in the scene without sensing the background, if they are used in staring mode. Relatively low element-count arrays (16 x 16) can be used for a variety of people-sensing applications, including people counting (for safety applications), queue monitoring, etc. With appropriate signal processing, such systems can also be used for the detection of particular events, such as a person falling over. There is a considerable need for automatic fall detection amongst older people, but there are important limitations to some of the current and emerging technologies available for this. Simple sensors, such as 1- or 2-element pyroelectric infra-red sensors, provide crude data that are difficult to interpret; devices worn on the person, such as wrist communicators and motion detectors, have potential but rely on the person being able and willing to wear the device; video cameras may be seen as intrusive and require considerable human resources to monitor activity, while machine interpretation of camera images is complex and may be difficult in this application area. The use of a pyroelectric thermal array sensor was seen to have a number of potential benefits. The sensor is wall-mounted and does not require the user to wear a device. It enables detailed analysis of a subject's motion to be achieved locally, within the detector, using only a modest processor. This is possible due to the relative ease with which data from the sensor can be interpreted, compared with the data generated by alternative sensors such as video devices.
In addition to the cost-effectiveness of this solution, it was felt that the lack of detail in the low-level data, together with the elimination of the need to transmit data outside the detector, would help to avert feelings of intrusiveness on the part of the end-user. The main benefits of this type of technology would be for older people who spend time alone in unsupervised environments. This would include people living alone in ordinary housing or in sheltered accommodation (apartment complexes for older people with a local warden) and non-communal areas in residential/nursing home environments (e.g., bedrooms and en-suite bathrooms and toilets). This paper will review the development of the array, the pyroelectric ceramic material upon which it is based, and the system capabilities. It will present results from the Framework 5 SIMBAD project, which used the system to monitor the movements of elderly people over a considerable period of time.
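    A hedged sketch of the kind of local processing such a detector implies (the actual SIMBAD algorithms are not given in the abstract, and all thresholds below are hypothetical): track the vertical centroid of "warm" pixels in a 16 x 16 thermal frame, and flag a large, rapid drop in that centroid as a possible fall.

```python
import numpy as np

def warm_centroid_row(frame, threshold=0.5):
    """Mean row index of above-threshold pixels (row 0 = top of the image)."""
    rows, _ = np.nonzero(frame > threshold)
    return rows.mean() if rows.size else None

def detect_fall(frames, drop_rows=6, window=3):
    """Flag a fall if the warm centroid drops sharply within a few frames."""
    centroids = [warm_centroid_row(f) for f in frames]
    for i in range(window, len(centroids)):
        a, b = centroids[i - window], centroids[i]
        if a is not None and b is not None and (b - a) >= drop_rows:
            return True   # centroid moved sharply toward the floor
    return False

# Synthetic sequence: an upright figure that abruptly ends up on the floor.
standing = np.zeros((16, 16)); standing[2:12, 7:9] = 1.0
lying = np.zeros((16, 16));    lying[13:15, 2:12] = 1.0
sequence = [standing] * 4 + [lying] * 4
```

Because only this coarse decision, not the imagery, would ever leave the detector, the privacy argument made in the abstract carries through.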

  13. Motion Pictures and Video Cassettes 1971. AV-USA Supplement 2.

    ERIC Educational Resources Information Center

    Hope, Thomas W.

    The financial status of the motion picture and video cassette industries in 1970 is reviewed. Based on the production rates and income of these industries, trends are identified. Figures on local origination of television programming and commercials are also included. The section on video cassettes includes the following information: the current…

  14. A Highly Miniaturized, Wireless Inertial Measurement Unit for Characterizing the Dynamics of Pitched Baseballs and Softballs

    PubMed Central

    McGinnis, Ryan S.; Perkins, Noel C.

    2012-01-01

    Baseball and softball pitch types are distinguished by the path and speed of the ball which, in turn, are determined by the angular velocity of the ball and the velocity of the ball center at the instant of release from the pitcher's hand. While radar guns and video-based motion capture (mocap) resolve ball speed, they provide little information about how the angular velocity of the ball and the velocity of the ball center develop and change during the throwing motion. Moreover, mocap requires measurements in a controlled lab environment and by a skilled technician. This study addresses these shortcomings by introducing a highly miniaturized, wireless inertial measurement unit (IMU) that is embedded in both baseballs and softballs. The resulting “ball-embedded” sensor resolves ball dynamics right on the field of play. Experimental results from ten pitches, five thrown by one softball pitcher and five by one baseball pitcher, demonstrate that this sensor technology can deduce the magnitude and direction of the ball's velocity at release to within 4.6% of measurements made using standard mocap. Moreover, the IMU directly measures the angular velocity of the ball, which further enables the analysis of different pitch types.
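    As a rough illustration of the quantities involved (the authors' estimation pipeline is not described in the abstract, and every number below is hypothetical), a gyroscope sample can be converted to a spin rate, and a release-velocity vector to a speed, with the percent difference against a mocap-style reference computed the way the paper reports its 4.6% figure:

```python
import numpy as np

def spin_rate_rpm(gyro_rad_s):
    """Spin rate in revolutions per minute from a gyro sample in rad/s."""
    return np.linalg.norm(gyro_rad_s) * 60.0 / (2.0 * np.pi)

def speed_error_percent(v_imu, v_ref):
    """Percent difference in speed magnitude versus a reference measurement."""
    return 100.0 * abs(np.linalg.norm(v_imu) - np.linalg.norm(v_ref)) \
        / np.linalg.norm(v_ref)

gyro = np.array([0.0, 0.0, 120.0])    # rad/s about the spin axis (hypothetical)
v_imu = np.array([28.0, 2.0, 1.0])    # m/s at release, from the IMU (hypothetical)
v_ref = np.array([27.5, 2.1, 0.9])    # mocap reference (hypothetical)
rpm = spin_rate_rpm(gyro)
err = speed_error_percent(v_imu, v_ref)
```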

  15. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grids, and a Gaussian kernel is then used to weight the motion of adjacent grids. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which exhibit casual jitter and parallax, and achieve good results.
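    A simplified, single-path version of the idea (the paper smooths multiple grid paths with spatial weights; here one 1-D path illustrates Gaussian smoothing plus a cropping constraint, with window size and deviation bound chosen for illustration, not taken from the paper):

```python
import numpy as np

def gaussian_smooth_path(path, sigma=5.0, radius=15, max_dev=8.0):
    """Smooth a camera path with a Gaussian kernel, bounding the deviation."""
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-offsets**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(path, radius, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="valid")
    # Keep the smoothed path within max_dev pixels of the original so the
    # required crop/warp of each frame stays in a proper range.
    return np.clip(smoothed, path - max_dev, path + max_dev)

# Synthetic camera path: slow intentional pan plus hand jitter.
rng = np.random.default_rng(2)
t = np.arange(200.0)
camera_path = 0.5 * t + rng.normal(0.0, 3.0, t.size)
stable_path = gaussian_smooth_path(camera_path)
```

The clip step is the crude stand-in for the paper's constrained optimization: smoothing alone removes jitter but can drift far from the true path, which would force a large crop.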

  16. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    PubMed

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical testing.

  17. Two novel motion-based algorithms for surveillance video analysis on embedded platforms

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.

    2010-05-01

    This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
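    A sketch of the first algorithm's core idea under stated assumptions: reuse the encoder's per-block motion vectors and keep only blocks whose motion is both significant and temporally consistent across a few frames. The thresholds here are illustrative, not the paper's, and the background-segmentation step is omitted.

```python
import numpy as np

def motion_mask(mv_fields, mag_thresh=1.0, min_frames=3):
    """Consistent-motion mask from a (frames, H, W, 2) motion-vector stack."""
    mags = np.linalg.norm(mv_fields, axis=-1)   # per-block MV magnitudes
    moving = mags > mag_thresh                  # per-frame significance test
    # A block counts as a target only if it moves in enough frames,
    # which suppresses one-off noise vectors from the encoder.
    return moving.sum(axis=0) >= min_frames

frames, h, w = 4, 6, 8
mvs = np.zeros((frames, h, w, 2))
mvs[:, 2:4, 3:5] = [3.0, 0.0]    # a target moving right in every frame
mvs[0, 5, 7] = [4.0, 4.0]        # a one-frame noise vector
mask = motion_mask(mvs)
```

Reusing encoder motion vectors this way is what makes the approach viable on a resource-constrained camera: the expensive motion search has already been paid for by the video encoder.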

  18. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
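    The Allan variance step can be sketched compactly. This is the standard non-overlapping formulation, not necessarily the authors' exact implementation: average the rate signal over clusters of size m, then take half the mean squared difference of successive cluster means. For a white-noise-dominated sensor the Allan variance falls as the cluster (averaging) time grows.

```python
import numpy as np

def allan_variance(rates, m):
    """Non-overlapping Allan variance at cluster size m samples."""
    n_clusters = rates.size // m
    means = rates[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Synthetic gyro signal dominated by white noise (angle random walk).
rng = np.random.default_rng(3)
gyro = rng.normal(0.0, 0.1, 200000)
avar_short = allan_variance(gyro, m=10)     # short averaging time
avar_long = allan_variance(gyro, m=1000)    # long averaging time
```

Plotting the Allan variance against cluster time on log-log axes and reading off the slopes is how the individual noise terms (white noise, bias instability, random walk) are identified before being modeled in the Kalman filter.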

  19. Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures.

    PubMed

    Imran, Noreen; Seet, Boon-Chong; Fong, A C M

    2015-01-01

    Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems namely, Slepian-Wolf and Wyner-Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

  20. Using a Digital Video Camera to Study Motion

    ERIC Educational Resources Information Center

    Abisdris, Gil; Phaneuf, Alain

    2007-01-01

    To illustrate how a digital video camera can be used to analyze various types of motion, this simple activity analyzes the motion and measures the acceleration due to gravity of a basketball in free fall. Although many excellent commercially available data loggers and software can accomplish this task, this activity requires almost no financial…

  1. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach

    PubMed Central

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-01-01

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from a PIR sensor network is important. Because PIR sensors do not log location or individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences, and the human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123
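    This is not the paper's geometric-algebra machinery, just the combinatorial idea underneath its matrix multiplication step: with a sensor-adjacency matrix A, entry (i, j) of A^k counts the k-step walks from sensor i to sensor j, so candidate trajectories consistent with an activation log can be generated or counted by repeated matrix multiplication.

```python
import numpy as np

# A corridor of four PIR sensors connected in a line: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

A2 = A @ A    # entry (i, j): number of 2-step walks from i to j
A3 = A2 @ A   # entry (i, j): number of 3-step walks from i to j
```

For example, `A3[0, 3]` is 1 because the only 3-step route from sensor 0 to sensor 3 is 0-1-2-3, while `A3[0, 1]` is 2 (0-1-0-1 and 0-1-2-1). The paper's spatio-temporal constraints amount to masking out entries of these products that the activation log rules out.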

  2. Spatio-Temporal Constrained Human Trajectory Generation from the PIR Motion Detector Sensor Network Data: A Geometric Algebra Approach.

    PubMed

    Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian

    2015-12-30

    Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from a PIR sensor network is important. Because PIR sensors do not log location or individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences, and the human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical pattern of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks.

  3. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion, and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight, the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  4. Quality evaluation of motion-compensated edge artifacts in compressed video.

    PubMed

    Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R

    2007-04-01

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
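    A sketch in the spirit of the paper's energy-based approach (the authors' actual metric also uses compressed bitstream information, which is not modeled here): compare the high-frequency spectral energy of a smooth image row against the same row with block-edge discontinuities added, the kind of HF energy that motion compensation displaces off block boundaries.

```python
import numpy as np

def hf_energy_fraction(signal, cutoff_fraction=0.25):
    """Fraction of spectral energy in the top cutoff_fraction of frequencies."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    cutoff = int(len(spectrum) * (1 - cutoff_fraction))
    return spectrum[cutoff:].sum() / spectrum.sum()

# A smooth row profile versus the same profile with sharp block-edge steps.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
smooth_row = np.sin(x)
blocky_row = smooth_row + 0.3 * np.sign(np.sin(16 * x))
```

The sharp steps spread energy into high-frequency bins where the smooth content has essentially none, which is exactly why an energy measure can flag motion-compensated edge artifacts.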

  5. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
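    A hand-written toy tree over the three compressed-domain feature types the paper describes (replay presence, scene-text amount, camera/object-motion statistics). The thresholds and structure are invented for illustration; the paper's tree is learned from data.

```python
def classify_clip(has_replay, text_fraction, motion_activity):
    """Toy decision tree: sports vs. non-sports from compressed-domain cues."""
    if has_replay:
        return "sports"                 # action replays are a strong sports cue
    if motion_activity > 0.6 and text_fraction < 0.2:
        return "sports"                 # fast camera/object motion, little text
    return "non-sports"

examples = [
    dict(has_replay=True, text_fraction=0.05, motion_activity=0.3),   # game clip
    dict(has_replay=False, text_fraction=0.5, motion_activity=0.1),   # news clip
]
labels = [classify_clip(**e) for e in examples]
```

The appeal of the compressed-domain approach is that all three inputs can be computed from macroblock, motion-vector, and bit-rate data without fully decoding the MPEG stream.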

  6. Estimating Intensities and/or Strong Motion Parameters Using Civilian Monitoring Videos: The May 12, 2008, Wenchuan Earthquake

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolin; Wu, Zhongliang; Jiang, Changsheng; Xia, Min

    2011-05-01

    One of the important issues in macroseismology and engineering seismology is how to get as much intensity and/or strong motion data as possible. We collected and studied several cases from the May 12, 2008, Wenchuan earthquake, exploring the possibility of estimating intensities and/or strong ground motion parameters using civilian monitoring videos that were deployed originally for security purposes. We used 53 video recordings in different places to determine the intensity distribution of the earthquake, which is shown to be consistent with the intensity distribution mapped by field investigation, and even better than that given by the Community Internet Intensity Map. In some of the videos, the seismic wave propagation is clearly visible and can be measured with reference to artificial objects such as cars and/or trucks. By measuring the propagating wave, strong motion parameters can be roughly but quantitatively estimated. As a demonstration of this 'propagating-wave method', we used a series of civilian videos recorded in different parts of Sichuan and Shaanxi and estimated the local PGAs. The estimates are compared with the measurements reported by strong motion instruments. The result shows that civilian monitoring videos provide a practical way of collecting and estimating intensity and/or strong motion parameters, having the advantage of being dynamic and able to be played back for further analysis, reflecting a new trend for macroseismology in our digital era.

  7. Effects of video compression on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Cha, Jae; Preece, Bradley

    2008-04-01

    The bandwidth requirements of modern target acquisition systems continue to increase with larger sensor formats and multi-spectral capabilities. To obviate this problem, still and moving imagery can be compressed, often resulting in a greater than 100-fold decrease in required bandwidth. Compression, however, is generally not error-free, and the generated artifacts can adversely affect task performance. The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate recently performed an assessment of various compression techniques on static imagery for tank identification. In this paper, we expand this initial assessment by studying and quantifying the impact of various video compression algorithms on tank identification performance. We perform a series of controlled human perception tests using three dynamic simulated scenarios: target moving/sensor static, target static/sensor static, and sensor tracking the target. Results of this study will quantify the effect of video compression on target identification and provide a framework to evaluate video compression on future sensor systems.

  8. X-Ray Calibration Facility/Advanced Video Guidance Sensor Test

    NASA Technical Reports Server (NTRS)

    Johnston, N. A. S.; Howard, R. T.; Watson, D. W.

    2004-01-01

    The Advanced Video Guidance Sensor was tested in the X-Ray Calibration Facility at Marshall Space Flight Center to establish its performance in vacuum. Two sensors were tested, and a timeline for each is presented. The sensor and test facility are discussed briefly. A new test stand was also developed. A table establishing sensor bias and spot-size growth for several ranges is presented, along with testing anomalies.

  9. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then bidirectional motion compensation is applied by blending the forward and backward predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
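    A minimal sketch of block matching with an explicit smoothness term, the general mechanism the abstract describes rather than the authors' specific algorithm: the matching cost is the SAD plus a penalty on deviation from a neighboring block's motion vector. The penalty weight and search range are illustrative choices.

```python
import numpy as np

def tme_block_match(ref, cur, top_left, block=8, search=4,
                    neighbor_mv=(0, 0), lam=2.0):
    """Find the MV minimizing SAD + lam * L1-distance to a neighbor's MV."""
    y, x = top_left
    patch = cur[y:y + block, x:x + block]
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > ref.shape[0] \
                    or rx + block > ref.shape[1]:
                continue
            sad = np.abs(ref[ry:ry + block, rx:rx + block] - patch).sum()
            # Smoothness term pulls the MV toward the neighboring block's MV.
            cost = sad + lam * (abs(dy - neighbor_mv[0])
                                + abs(dx - neighbor_mv[1]))
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

# Synthetic test: the whole scene shifts so the matching block in the
# reference frame lies at offset (2, 3) from the current block.
rng = np.random.default_rng(4)
ref = rng.random((32, 32))
cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))
mv = tme_block_match(ref, cur, top_left=(8, 8))
```

The smoothness term is what distinguishes TME from coding-oriented motion search: a spurious low-SAD match far from the neighbors' motion is penalized, so the field tracks the true object motion.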

  10. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the propagation of errors to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10-5, an increase in decoded video quality (PSNR) of up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  11. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The proposed technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but it requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  12. Adaptive temporal compressive sensing for video with motion estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Tang, Chaoying; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2018-04-01

    In this paper, we present an adaptive reconstruction method for temporal compressive imaging with pixel-wise exposure. The motion of objects is first estimated from images interpolated with a designed coding mask. With the help of this motion estimate, image blocks are classified according to their degree of motion and reconstructed with the corresponding dictionary, trained beforehand. Both simulation and experimental results show that the proposed method can obtain accurate motion information before reconstruction and efficiently reconstruct compressive video.

  13. Next Generation Advanced Video Guidance Sensor Development and Test

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Lee, Jimmy; Robertson, Bryan

    2009-01-01

    The Advanced Video Guidance Sensor (AVGS) was the primary docking sensor for the Orbital Express mission. The sensor performed extremely well during the mission, and the technology has been proven on orbit on other flights as well. Parts obsolescence issues prevented the construction of more AVGS units, so the next generation of the sensor was designed with current parts and updated to support future programs. The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been tested as a breadboard, as two different brassboard units, and as a prototype. The testing revealed further improvements that could be made and demonstrated capability beyond any previously shown by the sensor on orbit. This paper presents some of the sensor history, parts obsolescence issues, radiation concerns, and software improvements to the NGAVGS. In addition, some of the testing and test results are presented. The NGAVGS has shown that it will meet the general requirements for any space proximity operations or docking need.

  14. A sensor and video based ontology for activity recognition in smart environments.

    PubMed

    Mitchell, D; Morrow, Philip J; Nugent, Chris D

    2014-01-01

    Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.

  15. Proximity Operations and Docking Sensor Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Brewster, Linda L.; Lee, James E.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in spot mode out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next generation sensor was updated to allow it to support the CEV and COTS programs. The flight proven AR&D sensor has been redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation tolerant parts. In addition, new capabilities include greater sensor range, auto ranging capability, and real-time video output. This paper presents some sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the brassboard and prototype NGAVGS units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  16. Determination of the static friction coefficient from circular motion

    NASA Astrophysics Data System (ADS)

    Molina-Bolívar, J. A.; Cabrerizo-Vílchez, M. A.

    2014-07-01

    This paper describes a physics laboratory exercise for determining the coefficient of static friction between two surfaces. The circular motion of a coin placed on the surface of a rotating turntable has been studied. For this purpose, the motion is recorded with a high-speed digital video camera at 240 frames per second, and the videos are analyzed using Tracker video-analysis software, allowing the students to dynamically model the motion of the coin. The students obtain the static friction coefficient by comparing the centripetal and maximum static friction forces. The experiment requires only simple and inexpensive materials. The dynamics of circular motion and static friction forces are difficult for many students to understand. The proposed laboratory exercise addresses these topics, which are relevant to the physics curriculum.

  17. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
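    The pipeline described above (reshape subimages into vectors, take the SVD, project subsequent frames onto an orthonormal image basis) can be sketched as follows. Projecting onto the first basis vector after removing the per-pixel temporal mean is an assumption made here for illustration, not a detail taken from the paper.

```python
import numpy as np

def svd_subtle_motion(frames, basis_index=0):
    """Sketch of the SVD-based approach: columns of X are the region's
    subimages over time; an orthonormal image basis (OIB) comes from the
    left singular vectors; the vibration signal is the projection of each
    frame onto the chosen OIB. `basis_index` is an illustrative choice."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).T.astype(float)   # pixels x frames
    X -= X.mean(axis=1, keepdims=True)          # remove static background
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    oib = U[:, basis_index]                     # one orthonormal image basis
    return X.T @ oib                            # per-frame projection signal
```

    On a region whose intensity varies as a fixed spatial pattern times a temporal waveform, this projection recovers the waveform up to scale and sign.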

  18. [A new human machine interface in neurosurgery: The Leap Motion(®). Technical note regarding a new touchless interface].

    PubMed

    Di Tommaso, L; Aubry, S; Godard, J; Katranji, H; Pauchot, J

    2016-06-01

    Currently, cross-sectional imaging viewing is used in routine practice, whereas the surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). This type of contact carries a risk of breaking asepsis and causes loss of time. The recent appearance of devices such as the Leap Motion(®) (Leap Motion society, San Francisco, USA), a sensor that enables interaction with the computer without any physical contact, is of major interest in the field of surgery. However, its configuration and ergonomics pose key challenges in adapting it to the practitioner's requirements, the imaging software, and the surgical environment. This article suggests an easy configuration of the Leap Motion(®) in neurosurgery on a PC for optimized use with Carestream(®) Vue PACS v11.3.4 (Carestream Health, Inc., Rochester, USA) using a plug-in (to download at: https://drive.google.com/?usp=chrome_app#folders/0B_F4eBeBQc3ybElEeEhqME5DQkU) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  19. Active Learning Approaches by Visualizing ICT Devices with Milliseconds Resolution for Deeper Understanding in Physics

    NASA Astrophysics Data System (ADS)

    Kobayashi, Akizo; Okiharu, Fumiko

    2010-07-01

    We are developing various modularized materials for physics education to overcome students' misconceptions by use of ICT, i.e. video-analysis software and ultra-high-speed digital movies, motion detectors, force sensors, current and voltage probes, temperature sensors, etc. We also present some new modules of active learning approaches on electric circuits using a high-speed camera and voltage probes with millisecond resolution. We are especially trying to improve conceptual understanding by use of ICT devices with millisecond resolution in various areas of physics education. We give some modules on mass measurement by video analysis of collision phenomena using high-speed cameras: the Casio EX-F1 (1200 fps), EX-FH20 (1000 fps) and EX-FC100/150 (1000 fps). We present several new modules on collision phenomena to establish deeper understanding of the conservation of momentum. We discuss some effective results of trials in physics education training courses for science educators, and in those for science teachers during the license-renewal years required of teachers every ten years in Japan. Finally, we discuss some typical results of pre-tests and post-tests in our ICT-based active learning approaches, i.e. evidence of improvement in physics education (the increase in the rate of correct answers is at the 50% level).

  20. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the algorithms developed for calibrating the two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
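    The Allan variance step mentioned above has a standard form that can be sketched briefly. The non-overlapping binned estimator below is the basic textbook version, chosen for simplicity rather than taken from the paper; noise terms are identified from the slope of log(ADEV) versus log(tau) (about -1/2 for white noise, 0 near the bias-instability floor, +1/2 for rate random walk).

```python
import numpy as np

def allan_deviation(samples, fs, taus):
    """Basic (non-overlapping) Allan deviation: for each averaging time tau,
    split the signal into bins of tau seconds, take bin means, and compute
    half the mean squared difference of successive bin means."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))                 # samples per bin
        n = len(samples) // m
        bins = samples[:n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)
```

    For pure white noise the curve falls as tau^(-1/2), so increasing tau by a factor of 100 should drop the Allan deviation by roughly a factor of 10.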

  1. Activity recognition with wearable sensors on loose clothing

    PubMed Central

    Michael, Brendan; Howard, Matthew

    2017-01-01

    Observing human motion in natural everyday environments (such as the home) has evoked a growing interest in the development of on-body wearable sensing technology. However, wearable sensors suffer from motion artefacts introduced by the non-rigid attachment of sensors to the body, and the prevailing view is that it is necessary to eliminate these artefacts. This paper presents findings that suggest that these artefacts can, in fact, be used to distinguish between similar motions, by exploiting additional information provided by the fabric motion. An experimental study is presented whereby factors of both the motion and the properties of the fabric are analysed in the context of motion similarity. It is seen that while standard rigidly attached sensors have difficulty in distinguishing between similar motions, sensors mounted onto fabric exhibit significant differences (p < 0.01). An evaluation of the physical properties of the fabric shows that the stiffness of the material plays a role in this, with a trade-off between additional information and extraneous motion. This effect is evaluated in an online motion classification task, and the use of fabric-mounted sensors demonstrates an increase in prediction accuracy over rigidly attached sensors. PMID:28976978

  2. Activity recognition with wearable sensors on loose clothing.

    PubMed

    Michael, Brendan; Howard, Matthew

    2017-01-01

    Observing human motion in natural everyday environments (such as the home) has evoked a growing interest in the development of on-body wearable sensing technology. However, wearable sensors suffer from motion artefacts introduced by the non-rigid attachment of sensors to the body, and the prevailing view is that it is necessary to eliminate these artefacts. This paper presents findings that suggest that these artefacts can, in fact, be used to distinguish between similar motions, by exploiting additional information provided by the fabric motion. An experimental study is presented whereby factors of both the motion and the properties of the fabric are analysed in the context of motion similarity. It is seen that while standard rigidly attached sensors have difficulty in distinguishing between similar motions, sensors mounted onto fabric exhibit significant differences (p < 0.01). An evaluation of the physical properties of the fabric shows that the stiffness of the material plays a role in this, with a trade-off between additional information and extraneous motion. This effect is evaluated in an online motion classification task, and the use of fabric-mounted sensors demonstrates an increase in prediction accuracy over rigidly attached sensors.

  3. Keeping up with video game technology: objective analysis of Xbox Kinect™ and PlayStation 3 Move™ for use in burn rehabilitation.

    PubMed

    Parry, Ingrid; Carbullido, Clarissa; Kawada, Jason; Bagley, Anita; Sen, Soman; Greenhalgh, David; Palmieri, Tina

    2014-08-01

    Commercially available interactive video games are commonly used in rehabilitation to aid in physical recovery from a variety of conditions and injuries, including burns. Most video games were not originally designed for rehabilitation purposes, and although some games have shown therapeutic potential in burn rehabilitation, the physical demands of more recently released video games, such as Microsoft Xbox Kinect™ (Kinect) and Sony PlayStation 3 Move™ (PS Move), have not been objectively evaluated. Video game technology is constantly evolving and demonstrating different immersive qualities and interactive demands that may or may not have therapeutic potential for patients recovering from burns. This study analyzed the upper extremity motion demands of Kinect and PS Move using three-dimensional motion analysis to determine their applicability in burn rehabilitation. Thirty normal children played each video game while real-time movement of their upper extremities was measured to determine maximal excursion and amount of elevation time. Maximal shoulder flexion, shoulder abduction and elbow flexion range of motion were significantly greater while playing Kinect than the PS Move (p≤0.01). Elevation time of the arms above 120° was also significantly longer with Kinect (p<0.05). The physical demands for shoulder and elbow range of motion while playing the Kinect, and to a lesser extent PS Move, are comparable to the functional motion needed for daily tasks such as eating with a utensil and hair combing. Therefore, these more recently released commercially available video games show therapeutic potential in burn rehabilitation. Objectively quantifying the physical demands of video games commonly used in rehabilitation aids clinicians in integrating them into practice and lays the framework for further research on their efficacy. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.

  4. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied increasingly often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact, full-field measurement method, image processing still has a long way to go to outperform conventional sensing instruments (e.g. accelerometers, strain gauges, laser vibrometers). However, the technologies associated with image processing are developing rapidly and gaining attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization of a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The approach is demonstrated by processing data on a full-scale commercial structure (i.e. a wind turbine blade) with complex geometry and properties, and the results obtained correlate well with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.

  5. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.
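    The simplest segmentation idea mentioned in the abstract, direct thresholding, can be sketched as a per-frame motion strength signal. Note this sketch uses plain frame differencing as a stand-in for the paper's optical-flow computation, so it is only a rough proxy for the actual features.

```python
import numpy as np

def motion_strength_signal(frames, thresh=10.0):
    """Fraction of pixels per frame whose absolute inter-frame intensity
    change exceeds a threshold: a crude moving-region segmentation that
    yields a 1-D temporal motion strength signal."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    moving = diffs > thresh            # per-pixel "moving" mask
    return moving.mean(axis=(1, 2))    # fraction of moving pixels per frame
```

    Signals of this kind, computed per body region, are the sort of quantitative features a classifier could then be trained on.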

  6. Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring.

    PubMed

    Hoog Antink, Christoph; Schulz, Florian; Leonhardt, Steffen; Walter, Marian

    2017-12-25

    Sensors integrated into objects of everyday life potentially allow unobtrusive health monitoring at home. However, since the coupling of sensors and subject is not as well-defined as in a clinical setting, the signal quality is much more variable and can be disturbed significantly by motion artifacts. One way of tackling this challenge is the combined evaluation of multiple channels via sensor fusion. For robust and accurate sensor fusion, analyzing the influence of motion on different modalities is crucial. In this work, a multimodal sensor setup integrated into an armchair is presented that combines capacitively coupled electrocardiography, reflective photoplethysmography, two high-frequency impedance sensors and two types of ballistocardiography sensors. To quantify motion artifacts, a motion protocol performed by healthy volunteers is recorded with a motion capture system, and reference sensors perform cardiorespiratory monitoring. The shape-based signal-to-noise ratio SNR_S is introduced and used to quantify the effect of motion on different sensing modalities. Based on this analysis, an optimal combination of sensors and fusion methodology is developed and evaluated. Using the proposed approach, beat-to-beat heart rate is estimated with a coverage of 99.5% and a mean absolute error of 7.9 ms on 425 min of data from seven volunteers in a proof-of-concept measurement scenario.

  7. Flight of a falling maple seed

    NASA Astrophysics Data System (ADS)

    Lee, Injae; Choi, Haecheon

    2017-09-01

    This paper is associated with a video winner of a 2016 APS/DFD Gallery of Fluid Motion Award. The original video is available from the Gallery of Fluid Motion, https://doi.org/10.1103/APS.DFD.2016.GFM.V0046

  8. Compliant finger sensor for sensorimotor studies in MEG and MR environment

    NASA Astrophysics Data System (ADS)

    Li, Y.; Yong, X.; Cheung, T. P. L.; Menon, C.

    2016-07-01

    Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are widely used for functional brain imaging. The correlations between the sensorimotor functions of the hand and brain activity have been investigated in MEG/fMRI studies. Currently, limited information can be drawn from these studies due to the limitations of the existing motion sensors used to detect hand movements. One major challenge in designing these motion sensors is to limit the signal interference between the motion sensors and the MEG/fMRI. In this work, a novel finger motion sensor, built from low-ferromagnetic and non-conductive materials, is introduced. The finger sensor consists of four air-filled chambers. When compressed by the finger(s), the pressure change in the chambers can be detected by the electronics of the finger sensor. Our study has validated that the interference between the finger sensor and an MEG is negligible. Also, by applying a support vector machine algorithm to the data obtained from the finger sensor, at least 11 finger patterns can be discriminated. Compared with the use of traditional electromyography (EMG) for detecting finger motion, our proposed finger motion sensor is not only MEG/fMRI compatible but also easy to use. As the signals acquired from the sensor have a higher SNR than those of the EMG, no complex algorithms are required to detect different finger movement patterns. Future studies can utilize this motion sensor to investigate brain activation during different finger motions and correlate the activation with sensory and motor functions.

  9. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video are becoming more practical as a data gathering tool for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces, and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  10. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  11. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  12. Motion adaptive Kalman filter for super-resolution

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Nasse, Fabian; Schröder, Hartmut

    2011-01-01

    Superresolution is a sophisticated strategy for enhancing the image quality of both low- and high-resolution video, performing tasks like artifact reduction, scaling and sharpness enhancement in one algorithm, all of them reconstructing high-frequency components (above the Nyquist frequency) in some way. Recursive superresolution algorithms in particular can meet high quality demands because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware-efficient and therefore attractive even for real-time video processing. A very promising approach is the utilization of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution. Therefore, robust global motion models are mainly used, but this also limits the applicability of superresolution algorithms. Thus, handling sequences with complex object motion is essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and compare its performance to state-of-the-art methods like trainable filters.
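    A minimal per-pixel sketch of the motion-adaptive idea: run a scalar temporal Kalman filter at each pixel and raise the process variance wherever the frame-to-frame change is large, so moving regions are tracked instead of smoothed away. All variances and the threshold below are illustrative assumptions, not values from the paper, and the full method additionally performs motion compensation and upscaling.

```python
import numpy as np

def temporal_kalman(frames, q_motion=50.0, q_static=0.1, r=4.0, motion_thresh=15.0):
    """Per-pixel scalar Kalman filter with an identity state model.
    The process variance q is switched per pixel: large where the
    innovation suggests motion, small where the scene appears static."""
    est = frames[0].astype(float)
    p = np.full(est.shape, r)              # initial error covariance
    out = [est.copy()]
    for f in frames[1:]:
        f = f.astype(float)
        q = np.where(np.abs(f - est) > motion_thresh, q_motion, q_static)
        p_pred = p + q                     # predict
        k = p_pred / (p_pred + r)          # Kalman gain
        est = est + k * (f - est)          # update with the new frame
        p = (1.0 - k) * p_pred
        out.append(est.copy())
    return np.stack(out)
```

    In static regions the small q drives strong temporal averaging (noise reduction), while the large q after a detected change lets the estimate snap to the new content within a few frames.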

  13. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    PubMed

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed; Qmean and QSD reflect the amount of the infant's motion, while CSD reflects the variability of its spatial center of motion. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. CSD values were significantly lower in the recordings with continual FMs than in those with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
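    The two reliability coefficients used above can be computed from ordinary ANOVA mean squares. A minimal sketch (not the authors' code; `ratings` is a hypothetical n-subjects by k-sessions matrix):

```python
import numpy as np

def icc(ratings):
    """ICC(1,1) and ICC(3,1) for an (n subjects x k sessions) matrix."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    ss_total = ((y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
    ms_rows = ss_rows / (n - 1)
    # one-way residual (sessions treated as random, unmodeled):
    ms_within = (ss_total - ss_rows) / (n * (k - 1))
    # two-way residual (session effect removed):
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    icc11 = (ms_rows - ms_within) / (ms_rows + (k - 1) * ms_within)
    icc31 = (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)
    return icc11, icc31
```

    ICC(3,1) removes any systematic offset between the two recording sessions, which is why it is never smaller than ICC(1,1) when such an offset exists.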

  14. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can effortlessly perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting and counting such objects and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that identifies multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from the moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, yielding partial trajectories that span a coherent 3D region in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique aspect of the algorithm is to identify all possible coherent motion regions and then extract a subset of them based on an innovative measure that automatically locates moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams, and can directly process video streamed over the Internet or from a hardware device (camera).

  15. Wireless Sensor Network Deployment for Monitoring Wildlife Passages

    PubMed Central

    Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Losilla, Fernando; Kulakowski, Pawel; Garcia-Haro, Joan; Rodríguez, Alejandro; López-Bao, José-Vicente; Palomares, Francisco

    2010-01-01

    Wireless Sensor Networks (WSNs) are being deployed in very diverse application scenarios, including rural and forest environments. In these particular contexts, specimen protection and conservation is a challenge, especially in natural reserves, dangerous locations or hot spots of these reserves (i.e., roads, railways, and other civil infrastructures). This paper proposes and studies a WSN based system for generic target (animal) tracking in the surrounding area of wildlife passages built to establish safe ways for animals to cross transportation infrastructures. In addition, it allows target identification through the use of video sensors connected to strategically deployed nodes. This deployment is designed on the basis of the IEEE 802.15.4 standard, but it increases the lifetime of the nodes through an appropriate scheduling. The system has been evaluated for the particular scenario of wildlife monitoring in passages across roads. For this purpose, different schemes have been simulated in order to find the most appropriate network operational parameters. Moreover, a novel prototype, provided with motion detector sensors, has also been developed and its design feasibility demonstrated. Original software modules providing new functionalities have been implemented and included in this prototype. Finally, main performance evaluation results of the whole system are presented and discussed in depth. PMID:22163601

  16. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    PubMed

    Zheng, Ran; Yao, Chuanwei; Jin, Hai; Zhu, Lei; Zhang, Qin; Deng, Wei

    2015-01-01

    Surveillance video service (SVS) is one of the most important services provided in a smart city, and efficient surveillance video analysis techniques are essential to its utilization. Key frame extraction is a simple yet effective technique to achieve this goal: in surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos on GPUs (graphics processing units) to ensure high efficiency and accuracy. For identifying key frames, motion is the most salient feature in presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time; it is then smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach extracts key frames more accurately and efficiently than several other methods.
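    The selection step described above (smooth a per-frame motion signal, then take its local maxima) can be sketched as follows; in the paper the motion signal itself comes from the GPU stage, but here it is just a 1-D array:

```python
import numpy as np

def key_frames(motion, window=5):
    """Select key frames at local maxima of a smoothed motion signal.

    motion : 1-D array of per-frame motion magnitude (e.g. summed
             frame differences); window : moving-average width.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(motion, kernel, mode="same")   # noise reduction
    keys = [i for i in range(1, len(smooth) - 1)
            if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    return keys, smooth
```

    For a signal with two well-separated activity bursts, the two burst centers come back as the key-frame indices.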

  17. Ideas for Use of an iPad in Introductory Physics Education

    NASA Astrophysics Data System (ADS)

    Aurora, Tarlok S.

    2014-03-01

    Mobile devices such as an iPad, tablet computers, and smartphones offer an opportunity to collect information that facilitates physics teaching and learning. The data collected with built-in sensors, such as a video camera, may be analyzed on the mobile device itself or on a desktop computer. In this work, first, the circular motion of a steel ball rolling in a cereal bowl was analyzed to show that it consists of two simple harmonic motions in perpendicular directions. Second, the motion of two balls, one dropped vertically and the other launched as a projectile, was analyzed. Data were analyzed with Logger Pro software, and the value of g was determined graphically. Details of the work, its limitations, and additional examples will be described. The material so obtained may be used as a classroom demonstration to clarify physics concepts. In a school where students are required to have such portable devices, one may assign such activities as homework to enhance student engagement in learning physics. The author is thankful to USciences for the iPad, and to Rich Cosgriff, Phyllis Blumberg and Elia Eschenazi for useful discussions.

  18. Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.

    PubMed

    Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen

    2017-11-23

    Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each measuring one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
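    The abstract does not spell out the decoupling algorithm. A common linear approach (an assumption here, not necessarily the paper's method, with an invented 5x5 coupling matrix `C`) treats the five strain readings as a linear mixture of the five joint-angle components and inverts the calibrated mixture by least squares:

```python
import numpy as np

# Hypothetical coupling matrix: strains = C @ joint_angles.
# In practice C would be identified from calibration poses; here it
# is an invented example (unit self-sensitivity, mild cross-coupling).
C = np.eye(5) + 0.15 * (np.ones((5, 5)) - np.eye(5))

def decouple(strains, C):
    """Recover the five wrist joint-angle components from the coupled
    sensor strains by solving the least-squares problem s = C a."""
    return np.linalg.lstsq(C, strains, rcond=None)[0]
```

    As long as `C` is well conditioned, the least-squares solve recovers the component angles exactly from noiseless strains.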

  19. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video produced by simulation experiments or by an optically simulated random sampling camera, because no commercially available image sensors with random exposure or sampling capabilities currently exist. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row for each 8x8 pixel block. The sensor is thus not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the pseudo-randomly sampled images using an over-complete dictionary.

  20. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    NASA Astrophysics Data System (ADS)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

    The airborne video streams of small UAVs are commonly plagued by distracting jitter and shake, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. But when a small UAV makes a turn, its flight characteristics make the video prone to becoming oblique, which creates considerable difficulty for electronic image stabilization. The homography model performs well for oblique image motion estimation but poses great challenges for intentional motion estimation. In this paper, we therefore focus on stabilizing video while small UAVs bank and turn, assuming the UAV flies along an arc of fixed turning radius. After a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method that estimates the intentional motion by fitting the video's motion track to the path of the frame center. Meanwhile, dynamic mosaicking of the image sequence compensates for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method effectively stabilizes oblique video from small UAVs.

  1. Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation

    NASA Astrophysics Data System (ADS)

    Nakata, Robert

    Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared, or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements even when obscured by weather, clouds, and smoke, and can penetrate walls and collapsed structures built of non-metallic materials to depths of 1 m to 2 m, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that does not require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average 5 dB improvement in Signal to Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.

  2. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling as well as conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched, and false matches were removed by the modified RANSAC. Global motion was estimated using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results showed that the target images were stabilized even as the vibration amplitude of the video became increasingly large.
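    As an illustration of the robust-estimation step, the sketch below runs a basic RANSAC consensus loop over matched feature points for a pure-translation model. This is a simplification: the paper's modified RANSAC operates on a richer motion model that also includes scaling and rotation:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a global 2-D translation from matched feature points
    (src[i] in frame t, dst[i] in frame t+1) with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # 1-point hypothesis
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < tol                       # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the translation on the best inlier set
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0)
```

    Mismatched points (for example, features on independently moving objects) fail the consensus test and do not bias the global estimate.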

  3. Photo-consistency registration of a 4D cardiac motion model to endoscopic video for image guidance of robotic coronary artery bypass

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Rueckert, Daniel; Edwards, Eddie

    2009-02-01

    The aim of the work described in this paper is registration of a 4D preoperative motion model of the heart to the video view of the patient through the intraoperative endoscope. The heart motion is cyclical and can be modelled using multiple reconstructions of cardiac gated coronary CT. We propose the use of photoconsistency between the two views through the da Vinci endoscope to align to the preoperative heart surface model from CT. The temporal alignment from the video to the CT model could in principle be obtained from the ECG signal. We propose averaging of the photoconsistency over the cardiac cycle to improve the registration compared to a single view. Though there is considerable motion of the heart, after correct temporal alignment we suggest that the remaining motion should be close to rigid. Results are presented for simulated renderings and for real video of a beating heart phantom. We found much smoother sections at the minimum when using multiple phases for the registration, furthermore convergence was found to be better when more phases are used.

  4. Pyxis handheld polarimetric imager

    NASA Astrophysics Data System (ADS)

    Chenault, David B.; Pezzaniti, J. Larry; Vaden, Justin P.

    2016-05-01

    The instrumentation for measuring infrared polarization signatures has advanced significantly over the last decade. Previous work has shown the value of polarimetric imagery for a variety of target detection scenarios, including detection of manmade targets in clutter and detection of ground and maritime targets, while recent work has shown improvements in contrast for aircraft detection and biometric markers. These data collection activities have generally used laboratory or prototype systems with limitations on the allowable amount of target motion or on the sensor platform, and they usually require an attached computer for data acquisition and processing. Still, performance and sensitivity have steadily improved while size, weight, and power requirements have shrunk, enabling polarimetric imaging for a greater range of real-world applications. In this paper, we describe Pyxis®, a microbolometer-based imaging polarimeter that produces live polarimetric video of conventional, polarimetric, and fused image products. A polarization microgrid array integrated into the optical system captures all polarization states simultaneously, making the system immune to motion artifacts of either the sensor or the scene. The system is battery operated, rugged, weighs about a quarter pound, and can be helmet mounted or handheld. Onboard processing of polarization and fused image products enables the operator to see polarimetric signatures in real time. Both analog and digital outputs are available, with sensor control through a tablet interface. A top-level description of Pyxis® is given, followed by performance characteristics and representative data.

  5. Response of Seismometer with Symmetric Triaxial Sensor Configuration to Complex Ground Motion

    NASA Astrophysics Data System (ADS)

    Graizer, V.

    2007-12-01

    Most instruments used in seismological practice to record ground motion in all directions use three sensors oriented toward North, East, and upward. In this standard configuration, horizontal and vertical sensors differ in their construction because gravitational acceleration always acts on the vertical sensor. An alternative, symmetric sensor configuration was first introduced by Galperin (1955) for petroleum exploration. In this arrangement three identical sensors are also positioned orthogonally to each other, but each is tilted at the same angle of 54.7 degrees to the vertical axis (a triaxial coordinate system balanced on its corner). Records obtained with the symmetric configuration must be rotated into an earth-referenced X, Y, Z coordinate system. A number of recent seismological instruments (e.g., the broadband seismometers Streckeisen STS-2, Nanometrics Trillium, and Kinemetrics Cronos) use the symmetric sensor configuration. Most seismological studies assume that the rotational (rocking and torsion) components of earthquake ground motion are small enough to be neglected; recently, however, examples have been shown where rotational components are significant relative to the translational components. The response of pendulums installed in the standard configuration (one vertical and two horizontals) to complex input motion that includes rotations has been studied in a number of publications. We consider the response of pendulums in a symmetric sensor configuration to complex input motions including rotations, and the resulting triaxial system response, and we discuss possible implications of using the symmetric sensor configuration in strong-motion studies. Given the benefit of identical design for all three sensors, and the potentially lower cost of a three-component accelerograph as a result, the symmetric configuration may be useful for strong-motion measurements that do not require high-resolution post-processing. Its disadvantage is that if one sensor malfunctions or the sensors are misaligned, all three components degrade. The symmetric configuration also requires identical processing of each channel, placing a number of limitations on further processing of strong-motion records.
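    For the Galperin arrangement described above, the rotation into earth-referenced coordinates is a fixed orthonormal matrix. A sketch follows; the exact row signs and azimuth convention vary by instrument (consult the manufacturer's manual), so this is one common choice rather than the matrix of any particular seismometer:

```python
import numpy as np

# Orthonormal Galperin transform from symmetric (U, V, W) components
# (three identical sensors tilted 54.7 deg from vertical, with
# azimuths 120 deg apart) to earth-referenced (X, Y, Z).
G = np.array([[2.0, -1.0, -1.0],
              [0.0, np.sqrt(3.0), -np.sqrt(3.0)],
              [np.sqrt(2.0), np.sqrt(2.0), np.sqrt(2.0)]]) / np.sqrt(6.0)

def uvw_to_xyz(uvw):
    """Rotate a symmetric-triaxial sample into X, Y, Z components."""
    return G @ np.asarray(uvw, dtype=float)
```

    Because the matrix is orthonormal, pure vertical ground motion, which projects equally onto all three tilted sensors with factor cos(54.7 deg) = 1/sqrt(3), maps back to a unit Z component.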

  6. Simulation and ground testing with the Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2005-01-01

    The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.

  7. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to current demand for high-quality digital images; digital still cameras, for example, offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras: in cameras on the market, high resolution is incompatible with high frame rate. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the camera's utility.

  8. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps (combustion, crack propagation, collisions, plasma, spark discharge, an airbag in a car accident, a tire under sudden braking) generate sudden heat. Researchers in these fields require tools to measure high-speed motion and heat simultaneously. Ultra-high frame rates are achieved by an in-situ storage image sensor, in which each pixel is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels; the signals stored in each pixel are read out after the capture operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, its fill factor was only 15% because a light shield covered the wide in-situ storage area. In 2011, we therefore developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom of wiring on the front side 3). The BSI structure has the further advantage that additional layers, such as scintillators, can more easily be attached to the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, and discusses integration issues.

  9. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  11. Oscillatory motion based measurement method and sensor for measuring wall shear stress due to fluid flow

    DOEpatents

    Armstrong, William D [Laramie, WY; Naughton, Jonathan [Laramie, WY; Lindberg, William R [Laramie, WY

    2008-09-02

    A shear stress sensor for measuring fluid wall shear stress on a test surface is provided. The wall shear stress sensor is comprised of an active sensing surface and a sensor body. An elastic mechanism mounted between the active sensing surface and the sensor body allows movement between the active sensing surface and the sensor body. A driving mechanism forces the shear stress sensor to oscillate. A measuring mechanism measures displacement of the active sensing surface relative to the sensor body. The sensor may be operated under periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor measurably changes the amplitude or phase of the motion of the active sensing surface, or changes the force and power required from a control system in order to maintain constant motion. The device may be operated under non-periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor change the transient motion of the active sensor surface or change the force and power required from a control system to maintain a specified transient motion of the active sensor surface.

  12. Egomotion estimation with optic flow and air velocity sensors.

    PubMed

    Rutkowski, Adam J; Miller, Mikel M; Quinn, Roger D; Willis, Mark A

    2011-06-01

    We develop a method that allows a flyer to estimate its own motion (egomotion), the wind velocity, the ground slope, and the flight height using only inputs from onboard optic flow and air velocity sensors. Our artificial algorithm demonstrates how it could be possible for flying insects to determine their absolute egomotion using their available sensors, namely their eyes and wind-sensitive hairs and antennae. Although many behaviors can be performed by knowing only the direction of travel, behavioral experiments indicate that odor-tracking insects are able to estimate the wind direction and control their absolute egomotion (i.e., groundspeed). The egomotion estimation method that we have developed, which we call the opto-aeronautic algorithm, is tested in a variety of wind and ground-slope conditions using a video-recorded flight of a moth tracking a pheromone plume. Over all test cases that we examined, the algorithm achieved a mean absolute error in height of 7% or less. Furthermore, our algorithm is suitable for the navigation of aerial vehicles in environments where signals from the Global Positioning System are unavailable.
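    The core geometric relations underlying this kind of estimate can be stated compactly: translational (ventral) optic flow equals groundspeed divided by height above flat ground, and wind is the difference between motion over the ground and motion through the air. A toy illustration of those two relations (not the opto-aeronautic algorithm itself, which fuses them over time with slope estimation):

```python
import numpy as np

def height_from_flow(ground_speed, flow_rate):
    """Flight height from the ventral optic-flow relation
    flow = ground_speed / height (flat ground beneath the flyer)."""
    return ground_speed / flow_rate

def wind_estimate(ground_velocity, air_velocity):
    """Wind vector = motion over ground minus motion through the air."""
    return np.asarray(ground_velocity, float) - np.asarray(air_velocity, float)
```

    For example, a flyer moving at 2 m/s over ground while seeing 0.5 rad/s of ventral flow is 4 m up; if its airspeed sensor reads 1.5 m/s along-track with 0.5 m/s of sideslip, the residual is the wind.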

  13. Detection of Upscale-Crop and Partial Manipulation in Surveillance Video Based on Sensor Pattern Noise

    PubMed Central

    Hyun, Dai-Kyung; Ryu, Seung-Jin; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-01-01

    In many court cases, surveillance videos are used as significant court evidence. As these surveillance videos can easily be forged, it may cause serious social issues, such as convicting an innocent person. Nevertheless, there is little research being done on forgery of surveillance videos. This paper proposes a forensic technique to detect forgeries of surveillance video based on sensor pattern noise (SPN). We exploit the scaling invariance of the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter to reliably unveil traces of upscaling in videos. By excluding the high-frequency components of the investigated video and adaptively choosing the size of the local search window, the proposed method effectively localizes partially manipulated regions. Empirical evidence from a large database of test videos, including RGB (Red, Green, Blue)/infrared video, dynamic-/static-scene video and compressed video, indicates the superior performance of the proposed method. PMID:24051524
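    The underlying SPN test can be sketched with a plain normalized cross-correlation between a frame's noise residual and the camera's reference pattern noise (the MACE-MRH filter discussed above adds scale invariance on top of this; the data here are synthetic):

```python
import numpy as np

# Minimal SPN-matching sketch: a residual from the same camera correlates with
# the reference pattern noise; a residual from another camera does not.
# (Plain NCC baseline; the paper's MACE-MRH filter is more involved.)

def ncc(a, b):
    """Normalized cross-correlation of two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
spn = rng.normal(size=(64, 64))              # camera's reference pattern noise
scene = rng.normal(size=(64, 64))            # uncorrelated scene content
residual_same = 0.3 * spn + scene            # residual from the same camera
residual_other = rng.normal(size=(64, 64))   # residual from a different camera

r_same, r_other = ncc(residual_same, spn), ncc(residual_other, spn)
```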

  14. Running wavelet archetype aids the determination of heart rate from the video photoplethysmogram during motion.

    PubMed

    Addison, Paul S; Foo, David M H; Jacquel, Dominique

    2017-07-01

    The extraction of heart rate from a video-based biosignal during motion using a novel wavelet-based ensemble averaging method is described. Running Wavelet Archetyping (RWA) allows for the enhanced extraction of pulse information from the time-frequency representation, from which a video-based heart rate (HRvid) can be derived. This compares favorably to a reference heart rate derived from a pulse oximeter.

  15. Report on Distance Learning Technologies.

    DTIC Science & Technology

    1995-09-01

    26 cities. The CSX system includes full-motion video, animations, audio, and interactive examples and testing to teach the use of a new computer...video. The change to all-digital media now permits the use of full-motion video, animation, and audio on networks. It is possible to have independent...is possible to download entire multimedia presentations from the network. To date there is not a great deal known about teaching courses using the

  16. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2016-10-01

    study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with a MIDI controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are

  17. Classification and simulation of stereoscopic artifacts in mobile 3DTV content

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen

    2009-02-01

    We identify, categorize and simulate artifacts which might occur during delivery of stereoscopic video to mobile devices. We consider the stages of the 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on a 3D display. Human 3D vision works by assessing various depth cues - accommodation, binocular depth cues, pictorial cues and motion parallax. As a consequence, any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.

  18. A functional video-based anthropometric measuring system

    NASA Technical Reports Server (NTRS)

    Nixon, J. H.; Cater, J. P.

    1982-01-01

    A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed off-the-shelf motion analysis system was used for collecting optical information. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.

  19. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
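    The range computation reduces to pinhole triangulation. The sketch below assumes a laser mounted parallel to the optical axis at a known baseline, one common configuration; the parameter values are illustrative, not the patent's:

```python
# Hedged sketch of laser-spot triangulation: a laser parallel to the camera's
# optical axis, offset by a known baseline, images to a spot whose pixel
# offset from the principal point shrinks with distance (pinhole model).
# All names and values here are illustrative.

def range_from_spot(baseline_m, focal_px, spot_offset_px):
    """Distance to the target from the laser spot's pixel offset."""
    # The spot lies baseline_m off-axis in the scene, so its image offset is
    # focal_px * baseline_m / range  =>  range = focal_px * baseline_m / offset
    return focal_px * baseline_m / spot_offset_px

r = range_from_spot(baseline_m=0.10, focal_px=800.0, spot_offset_px=20.0)
# => 4.0 metres for this illustrative geometry
```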

  20. Detecting dominant motion patterns in crowds of pedestrians

    NASA Astrophysics Data System (ADS)

    Saqib, Muhammad; Khan, Sultan Daud; Blumenstein, Michael

    2017-02-01

    As the population of the world increases, urbanization generates crowding situations which pose challenges to public safety and security. Manual analysis of crowded situations is a tedious job and usually prone to errors. In this paper, we propose a novel technique of crowd analysis, the aim of which is to detect different dominant motion patterns in real-time videos. A motion field is generated by computing the dense optical flow. The motion field is then divided into blocks. For each block, we adopt an intra-clustering algorithm for detecting different flows within the block. Later on, we employ inter-clustering for clustering the flow vectors among different blocks. We evaluate the performance of our approach on different real-time videos. The experimental results show that our proposed method is capable of detecting distinct motion patterns in crowded videos. Moreover, our algorithm outperforms state-of-the-art methods.
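    A toy stand-in for dominant-pattern detection, replacing the paper's intra-/inter-block clustering with a simple angle histogram over flow vectors (the data and bin parameters are made up):

```python
import numpy as np

# Toy dominant-motion detection: histogram the angles of flow vectors and
# report directions shared by a sizeable fraction of the crowd. (The paper's
# intra-/inter-block clustering is more elaborate; this is the simplest
# possible stand-in.)

def dominant_directions(flow, n_bins=8, min_frac=0.2):
    """flow: (N, 2) array of (dx, dy); returns the set of dominant angle bins."""
    angles = np.arctan2(flow[:, 1], flow[:, 0])                   # in [-pi, pi]
    bins = np.round((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    counts = np.bincount(bins, minlength=n_bins)
    return set(int(b) for b in np.flatnonzero(counts >= min_frac * len(flow)))

rng = np.random.default_rng(1)
right = np.c_[np.ones(50), rng.normal(0, 0.05, 50)]   # crowd moving along +x
up = np.c_[rng.normal(0, 0.05, 50), np.ones(50)]      # crowd moving along +y
dom = dominant_directions(np.vstack([right, up]))     # bin 4 = +x, bin 6 = +y
```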

  1. Turning on a dime: Asymmetric vortex formation in hummingbird maneuvering flight

    NASA Astrophysics Data System (ADS)

    Ren, Yan; Dong, Haibo; Deng, Xinyan; Tobalske, Bret

    2016-09-01

    This paper is associated with a video winner of a 2015 APS/DFD Gallery of Fluid Motion Award. The original video is available from the Gallery of Fluid Motion, http://dx.doi.org/10.1103/APS.DFD.2015.GFM.V0088

  2. Development and evaluation of low cost game-based balance rehabilitation tool using the Microsoft Kinect sensor.

    PubMed

    Lange, Belinda; Chang, Chien-Yen; Suma, Evan; Newman, Bradley; Rizzo, Albert Skip; Bolas, Mark

    2011-01-01

    The use of commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to "cheat" inaccurate trackers by performing minimal movement (e.g. twisting the wrist holding a Wiimote instead of performing a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low-cost depth-sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury.

  3. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCRCB video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, memory size and memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having a relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
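    The block-matching cost at the heart of MPEG-1 motion estimation can be sketched as a brute-force sum-of-absolute-differences (SAD) search (the chip described above uses a hierarchical search precisely to avoid this cost; block size and search radius below are illustrative):

```python
import numpy as np

# Illustrative full-search block matching with a SAD cost, the building block
# of MPEG-1 motion estimation. (Brute force; real encoders use hierarchical
# or other fast search strategies.)

def best_motion_vector(ref, cur, bx, by, bsize=8, radius=4):
    """Find the (dx, dy) displacing block (bx, by) of `cur` into `ref`."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(int) - block).sum()
                if best is None or sad < best:
                    best, best_mv = sad, (dx, dy)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, shift=(1, 3), axis=(0, 1))   # scene shifted down 1, right 3
mv, sad = best_motion_vector(ref, cur, bx=12, by=12)
```

The recovered vector points back to the block's origin in the reference frame, so a scene shift of (+3, +1) yields a motion vector of (-3, -1) with zero residual.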

  4. Commercialization of Australian advanced infrared technology

    NASA Astrophysics Data System (ADS)

    Redpath, John; Brown, Allen; Woods, William F.

    1995-09-01

    For several decades, the main thrust in infrared technology developments in Australia has been in two main sensor technologies: uncooled silicon chip printed bolometric sensors pioneered by DSTO's Kevin Liddiard, and precision engineered high quality Cadmium Mercury Telluride developed at DSTO under the guidance of Dr. Richard Hartley. In late 1993 a low cost infrared imaging device was developed at DSTO as a sensor for guided missiles. The combination of these three innovations made up a unique package that enabled Australian industry to break through the barriers of commercializing infrared technology. The privately owned company R.J. Optronics Pty Ltd undertook the process of re-engineering a selection of these DSTO developments to be applicable to a wide range of infrared products. The first project was a novel infrared imager based on a Palmer scan (translated circle) mechanism. This device applies a spinning wedge and a single detector, and uses a video processor to convert the image into a standard rectangular format. Originally developed as an imaging seeker for a stand-off weapon, it produces such high quality images at such a low cost that it is now also being adapted for a wide variety of other military and commercial applications. A technique for electronically stabilizing it has been developed which uses the inertial signals from co-mounted sensors to compensate for platform motions. This enables it to meet the requirements of aircraft, marine vessels and masthead sight applications without the use of gimbals. After tests on a three-axis motion table, several system configurations have now been successfully operated on a number of lightweight platforms, including a Cessna 172 and the Australian-made Seabird Seeker aircraft.

  5. Motion-based video monitoring for early detection of livestock diseases: The case of African swine fever

    PubMed Central

    Martínez-Avilés, Marta; Ivorra, Benjamin; Martínez-López, Beatriz; Ramos, Ángel Manuel; Sánchez-Vizcaíno, José Manuel

    2017-01-01

    Early detection of infectious diseases can substantially reduce the health and economic impacts on livestock production. Here we describe a system for monitoring animal activity based on video and data processing techniques, in order to detect slowdown and weakening due to infection with African swine fever (ASF), one of the most significant threats to the pig industry. The system classifies and quantifies motion-based animal behaviour and daily activity in video sequences, allowing automated and non-intrusive surveillance in real-time. The aim of this system is to evaluate significant changes in animals’ motion after being experimentally infected with ASF virus. Indeed, pig mobility declined progressively and fell significantly below pre-infection levels starting at four days after infection at a confidence level of 95%. Furthermore, daily motion decreased in infected animals by approximately 10% before the detection of the disease by clinical signs. These results show the promise of video processing techniques for real-time early detection of livestock infectious diseases. PMID:28877181
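    The activity-quantification idea can be reduced to a frame-differencing sketch: the fraction of pixels that change between consecutive frames serves as an activity index (the thresholds and synthetic scene below are illustrative; the authors' pipeline also classifies behaviour):

```python
import numpy as np

# Minimal motion-quantification sketch: mean fraction of changed pixels per
# consecutive frame pair as an activity index. A lethargic (static) scene
# scores zero; a scene with a moving blob scores higher.

def activity_index(frames, thresh=15):
    """Mean fraction of changed pixels over consecutive frame pairs."""
    diffs = [float(np.mean(np.abs(a.astype(int) - b.astype(int)) > thresh))
             for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

base = np.zeros((40, 40), dtype=np.uint8)
still = [base.copy() for _ in range(5)]          # no movement at all
active = []
for t in range(5):                               # a bright 10x10 blob moving
    f = base.copy()                              # 4 px to the right per frame
    f[10:20, 5 + 4 * t: 15 + 4 * t] = 255
    active.append(f)
idx_still, idx_active = activity_index(still), activity_index(active)
```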

  6. An unsupervised method for summarizing egocentric sport videos

    NASA Astrophysics Data System (ADS)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

    People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information and it automatically finds the number of the key-frames. Our blind user study on the new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.

  7. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the efficiency of the motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences and experimental results clearly demonstrate the effectiveness of the proposed method.
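    A minimal sketch of registration-error-aware weighting (the exponential weight below is an assumed form chosen for illustration; the paper designs its own weighting mechanism):

```python
import math

# Hedged sketch of robust fusion: each reference observation is down-weighted
# by its registration error, so misregistered outliers barely influence the
# super-resolved estimate. The Gaussian weight form is illustrative only.

def weighted_estimate(values, reg_errors, sigma=1.0):
    """Fuse observations; large registration error => near-zero weight."""
    weights = [math.exp(-(e / sigma) ** 2) for e in reg_errors]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Three well-registered observations agree at 10; one outlier (value 50) has
# a large registration error, so it barely moves the estimate.
est = weighted_estimate([10.0, 10.0, 10.0, 50.0], [0.1, 0.2, 0.1, 3.0])
```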

  8. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been under way at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000, and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  9. Multi-Sensory Features for Personnel Detection at Border Crossings

    DTIC Science & Technology

    2011-07-08

    challenging problem. Video sensors consume high amounts of power and require a large volume for storage. Hence, it is preferable to use non-imaging sensors...temporal distribution of gait beats [5]. At border crossings, animals such as mules, horses, or donkeys are often known to carry loads. Animal hoof...field, passive ultrasonic, sonar, and both infrared and visible video sensors. Each sensor suite is placed along the path with a spacing of 40 to

  10. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    PubMed

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664 ± 619 rad/s).
The current data indicate that existing wearable sensor technologies may substantially overestimate head impact events. Further, while the wearable sensors always estimated a head impact location, only 48% of the impacts were a result of direct contact to the head as characterized on video. Using wearable sensors and video to verify head impacts may decrease the inclusion of false-positive impacts during game activity in the analysis.

  11. Open architecture CMM motion controller

    NASA Astrophysics Data System (ADS)

    Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John

    2001-12-01

    Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.

  12. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
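    The gesture matching described above rests on dynamic time-warping; a compact DTW distance between 1-D sequences can be sketched as follows (the sequences are made-up stand-ins for motion traces):

```python
# Compact dynamic time-warping (DTW) distance, the core of gesture matching:
# the same gesture performed at a different speed aligns with zero cost,
# while a different gesture does not.

def dtw(a, b):
    """DTW alignment cost between two 1-D sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

swipe = [0, 1, 2, 3, 4]
swipe_slow = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]   # same gesture, performed slower
shake = [0, 4, 0, 4, 0]
d_same, d_diff = dtw(swipe, swipe_slow), dtw(swipe, shake)
```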

  13. Video quality assessment based on correlation between spatiotemporal motion energies

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Mou, Xuanqin

    2016-09-01

    Video quality assessment (VQA) has been a hot research topic because of the rapidly increasing demand for video communications. From the earliest PSNR metric to advanced models that are perceptually aware, researchers have made great progress in this field by introducing properties of the human vision system (HVS) into VQA model design. Among various algorithms that model the property of HVS perceiving motion, the spatiotemporal energy model has been validated to be highly consistent with psychophysical experiments. In this paper, we take the spatiotemporal energy model into VQA model design by the following steps. 1) According to the pristine spatiotemporal energy model proposed by Adelson et al., we apply the linear filters, which are oriented in space-time and tuned in spatial frequency, to filter the reference and test videos respectively. The outputs of quadrature pairs of the above filters are then squared and summed to give two measures of motion energy, which are named rightward and leftward energy responses, respectively. 2) Based on the pristine model, we calculate the summation of the rightward and leftward energy responses as spatiotemporal features to represent perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is calculated with statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparing it with existing FR-VQA models. Experimental results show that STME performs with excellent prediction accuracy and ranks among state-of-the-art VQA models.
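    Step 1 can be illustrated with a toy opponent-energy computation on a synthetic x-t slice, using a bare complex-exponential quadrature pair rather than the paper's full filter bank (a rightward-drifting grating yields far more rightward than leftward energy):

```python
import numpy as np

# Toy spatiotemporal opponent energy (after Adelson & Bergen): the squared
# magnitude of a complex-exponential projection equals the squared even
# response plus the squared odd (quadrature) response.

x = np.arange(64)
t = np.arange(64)
X, T = np.meshgrid(x, t)                 # X: space axis, T: time axis
k = 2 * np.pi * 4 / 64                   # spatial frequency
w = 2 * np.pi * 6 / 64                   # temporal frequency
stim = np.cos(k * X - w * T)             # grating drifting rightward

def energy(kx, wt):
    """Quadrature-pair energy = |<stim, exp(i*(kx*X - wt*T))>|^2."""
    filt = np.exp(1j * (kx * X - wt * T))
    return float(np.abs((stim * np.conj(filt)).sum()) ** 2)

e_right = energy(k, w)     # filter tuned to rightward drift
e_left = energy(k, -w)     # mirror filter tuned to leftward drift
```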

  14. A web-based system for home monitoring of patients with Parkinson's disease using wearable sensors.

    PubMed

    Chen, Bor-Rong; Patel, Shyamal; Buckley, Thomas; Rednic, Ramona; McClure, Douglas J; Shih, Ludy; Tarsy, Daniel; Welsh, Matt; Bonato, Paolo

    2011-03-01

    This letter introduces MercuryLive, a platform to enable home monitoring of patients with Parkinson's disease (PD) using wearable sensors. MercuryLive contains three tiers: a resource-aware data collection engine that relies upon wearable sensors, web services for live streaming and storage of sensor data, and a web-based graphical user interface client with video conferencing capability. In addition, the platform has the capability of analyzing sensor (i.e., accelerometer) data to reliably estimate clinical scores capturing the severity of tremor, bradykinesia, and dyskinesia. Testing results showed an average data latency of less than 400 ms and video latency of about 200 ms with a video frame rate of about 13 frames/s when 800 kb/s of bandwidth were available and 40% video compression was used; data feature upload required 1 min of extra time following a 10 min interactive session. These results indicate that the proposed platform is suitable for monitoring patients with PD to facilitate the titration of medications in the late stages of the disease.

  15. Ami - The chemist's amanuensis.

    PubMed

    Brooks, Brian J; Thorn, Adam L; Smith, Matthew; Matthews, Peter; Chen, Shaoming; O'Steen, Ben; Adams, Sam E; Townsend, Joe A; Murray-Rust, Peter

    2011-10-14

    The Ami project was a six month Rapid Innovation project sponsored by JISC to explore the Virtual Research Environment space. The project brainstormed with chemists and decided to investigate ways to facilitate monitoring and collection of experimental data. A frequently encountered use-case was identified of how the chemist reaches the end of an experiment, but finds an unexpected result. The ability to replay events can significantly help make sense of how things progressed. The project therefore concentrated on collecting a variety of dimensions of ancillary data - data that would not normally be collected due to practicality constraints. There were three main areas of investigation: 1) Development of a monitoring tool using infrared and ultrasonic sensors; 2) Time-lapse motion video capture (for example, videoing 5 seconds in every 60); and 3) Activity-driven video monitoring of the fume cupboard environs. The Ami client application was developed to control these separate logging functions. The application builds up a timeline of the events in the experiment and around the fume cupboard. The videos and data logs can then be reviewed after the experiment in order to help the chemist determine the exact timings and conditions used. The project experimented with ways in which a Microsoft Kinect could be used in a laboratory setting. Investigations suggest that it would not be an ideal device for controlling a mouse, but it shows promise for usages such as manipulating virtual molecules.

  16. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static-object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
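    A minimal sketch of an adaptive background model (running average plus thresholding; shadow removal in HSV and the Kalman tracking are omitted, and all parameters are illustrative):

```python
import numpy as np

# Minimal adaptive background subtraction: the background is a slowly updated
# running average, and pixels far from it form the foreground mask.

def update(background, frame, alpha=0.05, thresh=25):
    """Return (foreground mask, updated background) for one frame."""
    mask = np.abs(frame.astype(float) - background) > thresh
    background = (1 - alpha) * background + alpha * frame   # slow adaptation
    return mask, background

bg = np.full((30, 30), 80.0)                  # learned empty-scene background
frame = np.full((30, 30), 80, dtype=np.uint8)
frame[5:15, 5:15] = 200                       # a person enters the scene
mask, bg = update(bg, frame)
n_fg = int(mask.sum())                        # pixels flagged as foreground
```

A practical refinement is to update the background only outside the foreground mask, so a person standing still is not absorbed into the model too quickly.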

  17. Ami - The chemist's amanuensis

    PubMed Central

    2011-01-01

    The Ami project was a six month Rapid Innovation project sponsored by JISC to explore the Virtual Research Environment space. The project brainstormed with chemists and decided to investigate ways to facilitate monitoring and collection of experimental data. A frequently encountered use-case was identified of how the chemist reaches the end of an experiment, but finds an unexpected result. The ability to replay events can significantly help make sense of how things progressed. The project therefore concentrated on collecting a variety of dimensions of ancillary data - data that would not normally be collected due to practicality constraints. There were three main areas of investigation: 1) Development of a monitoring tool using infrared and ultrasonic sensors; 2) Time-lapse motion video capture (for example, videoing 5 seconds in every 60); and 3) Activity-driven video monitoring of the fume cupboard environs. The Ami client application was developed to control these separate logging functions. The application builds up a timeline of the events in the experiment and around the fume cupboard. The videos and data logs can then be reviewed after the experiment in order to help the chemist determine the exact timings and conditions used. The project experimented with ways in which a Microsoft Kinect could be used in a laboratory setting. Investigations suggest that it would not be an ideal device for controlling a mouse, but it shows promise for usages such as manipulating virtual molecules. PMID:21999587

  18. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background-intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using fewer than 5 k instruction cycles (12 instructions per pixel) per frame. At the 1 kHz sample rate, the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
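
    The image interpolation algorithm (I2A) reduces global translation estimation to a small linear system. A minimal sketch, assuming a central-difference linearization of the reference frame (the exact formulation in the sensor's DSP firmware is not given here):

```python
import numpy as np

def i2a_shift(ref, cur):
    """Estimate the global (dx, dy) translation of the scene between two
    frames: linearize cur around ref with central-difference gradients and
    solve the resulting 2x2 least-squares system."""
    ref = ref.astype(float)
    cur = cur.astype(float)
    gx = (ref[1:-1, 2:] - ref[1:-1, :-2]) / 2.0   # d/dx on the interior
    gy = (ref[2:, 1:-1] - ref[:-2, 1:-1]) / 2.0   # d/dy on the interior
    dt = cur[1:-1, 1:-1] - ref[1:-1, 1:-1]
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * dt), np.sum(gy * dt)])
    # cur(x) ≈ ref(x - v)  =>  dt ≈ -v · ∇ref ; solve for v = (dx, dy)
    dx, dy = np.linalg.solve(A, -b)
    return dx, dy
```

    A 1 kHz rate is plausible for such an algorithm because the per-frame cost is only a handful of multiply-accumulates per pixel plus one 2×2 solve.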

  19. A triboelectric motion sensor in wearable body sensor network for human activity recognition.

    PubMed

    Hui Huang; Xian Li; Ye Sun

    2016-08-01

    The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis, and rehabilitation. In contrast to traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments are conducted to collect data for five common activities: sitting and standing, walking, walking upstairs, walking downstairs, and running. The k-Nearest Neighbor (kNN) algorithm is adopted to recognize these activities and validate the feasibility of this new approach. The results show that our system can perform physical activity recognition with a success rate of over 80% for walking, sitting, and standing. The triboelectric structure can also be used as an energy harvester due to its high output voltage under random low-frequency motion.
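
    The kNN step can be illustrated in a few lines of NumPy (the toy (amplitude, frequency) features and class labels below are invented for illustration; the paper does not specify its feature set):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Label a feature vector x by majority vote among its k nearest
    training samples under Euclidean distance."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

    In practice each activity's windowed signal would be reduced to such a feature vector before classification; kNN needs no training phase beyond storing the labeled examples.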

  20. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  1. 36 CFR 1237.4 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... definitions apply to this part? (a) See § 1220.18 of this subchapter for definitions of terms used throughout... prints from these negatives. Also included are infrared, ultraviolet, multispectral, video, and radar... still photographs and motion media (i.e., moving images whether on motion picture film or as video...

  2. 36 CFR 1237.4 - What definitions apply to this part?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... definitions apply to this part? (a) See § 1220.18 of this subchapter for definitions of terms used throughout... prints from these negatives. Also included are infrared, ultraviolet, multispectral, video, and radar... still photographs and motion media (i.e., moving images whether on motion picture film or as video...

  3. 34 CFR 3.4 - Use of the seal.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... plaques; (4) For electronic media, motion picture film, video tape and other audiovisual media prepared by...) On electronic media, motion picture film, video tape, and other audiovisual media prepared by or for... seal, replica, reproduction or embossing seal is punishable under 18 U.S.C. 506. (g) Any person using...

  4. 34 CFR 3.4 - Use of the seal.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... plaques; (4) For electronic media, motion picture film, video tape and other audiovisual media prepared by...) On electronic media, motion picture film, video tape, and other audiovisual media prepared by or for... seal, replica, reproduction or embossing seal is punishable under 18 U.S.C. 506. (g) Any person using...

  5. 34 CFR 3.4 - Use of the seal.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... plaques; (4) For electronic media, motion picture film, video tape and other audiovisual media prepared by...) On electronic media, motion picture film, video tape, and other audiovisual media prepared by or for... seal, replica, reproduction or embossing seal is punishable under 18 U.S.C. 506. (g) Any person using...

  6. 36 CFR 1237.4 - What definitions apply to this part?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... definitions apply to this part? (a) See § 1220.18 of this subchapter for definitions of terms used throughout... prints from these negatives. Also included are infrared, ultraviolet, multispectral, video, and radar... still photographs and motion media (i.e., moving images whether on motion picture film or as video...

  7. 34 CFR 3.4 - Use of the seal.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plaques; (4) For electronic media, motion picture film, video tape and other audiovisual media prepared by...) On electronic media, motion picture film, video tape, and other audiovisual media prepared by or for... seal, replica, reproduction or embossing seal is punishable under 18 U.S.C. 506. (g) Any person using...

  8. 34 CFR 3.4 - Use of the seal.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... plaques; (4) For electronic media, motion picture film, video tape and other audiovisual media prepared by...) On electronic media, motion picture film, video tape, and other audiovisual media prepared by or for... seal, replica, reproduction or embossing seal is punishable under 18 U.S.C. 506. (g) Any person using...

  9. 36 CFR 1237.4 - What definitions apply to this part?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... definitions apply to this part? (a) See § 1220.18 of this subchapter for definitions of terms used throughout... prints from these negatives. Also included are infrared, ultraviolet, multispectral, video, and radar... still photographs and motion media (i.e., moving images whether on motion picture film or as video...

  10. Omni-Purpose Stretchable Strain Sensor Based on a Highly Dense Nanocracking Structure for Whole-Body Motion Monitoring.

    PubMed

    Jeon, Hyungkook; Hong, Seong Kyung; Kim, Min Seo; Cho, Seong J; Lim, Geunbae

    2017-12-06

    Here, we report an omni-purpose stretchable strain sensor (OPSS sensor) based on a nanocracking structure for monitoring whole-body motions including both joint-level and skin-level motions. By controlling and optimizing the nanocracking structure, inspired by the spider sensory system, the OPSS sensor is endowed with both high sensitivity (gauge factor ≈ 30) and a wide working range (strain up to 150%) under great linearity (R 2 = 0.9814) and fast response time (<30 ms). Furthermore, the fabrication process of the OPSS sensor has advantages of being extremely simple, patternable, integrated circuit-compatible, and reliable in terms of reproducibility. Using the OPSS sensor, we detected various human body motions including both moving of joints and subtle deforming of skin such as pulsation. As specific medical applications of the sensor, we also successfully developed a glove-type hand motion detector and a real-time Morse code communication system for patients with general paralysis. Therefore, considering the outstanding sensing performances, great advantages of the fabrication process, and successful results from a variety of practical applications, we believe that the OPSS sensor is a highly suitable strain sensor for whole-body motion monitoring and has potential for a wide range of applications, such as medical robotics and wearable healthcare devices.
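
    The reported gauge factor relates resistance change to strain via GF = (ΔR/R₀)/ε, so strain can be read back from a resistance measurement. A minimal sketch (the function name is illustrative; GF ≈ 30 is the sensitivity quoted above):

```python
def strain_from_resistance(r, r0, gauge_factor=30.0):
    """Invert GF = (dR/R0) / strain to recover strain from a measured
    resistance r, given the unstrained resistance r0.

    gauge_factor=30 matches the sensitivity reported for the OPSS sensor;
    the function itself is an illustrative sketch, not the authors' code.
    """
    return (r - r0) / r0 / gauge_factor
```

    For example, a 30% resistance increase at GF = 30 corresponds to 1% strain, comfortably inside the sensor's reported 150% working range.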

  11. New Integrated Video and Graphics Technology: Digital Video Interactive.

    ERIC Educational Resources Information Center

    Optical Information Systems, 1987

    1987-01-01

    Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)

  12. Next Generation Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Spencer, Susan; Bryan, Tom; Johnson, Jimmie; Robertson, Bryan

    2008-01-01

    The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). AVGS has a proven pedigree, based on extensive ground testing and flight demonstrations. The AVGS on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km. The first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next-generation sensor must be updated to support the CEV and COTS programs. The flight-proven AR&D sensor is being redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS (NGAVGS) at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation-tolerant parts. In addition, new capabilities might include greater sensor range, auto ranging, and real-time video output. This paper presents an approach to sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It will also discuss approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, parts selection and test plans for the NGAVGS will be addressed to provide a highly reliable flight-qualified sensor. Expanded capabilities through innovative use of existing capabilities will also be discussed.

  13. A computer vision framework for finger-tapping evaluation in Parkinson's disease.

    PubMed

    Khan, Taha; Nyholm, Dag; Westin, Jerker; Dougherty, Mark

    2014-01-01

    The rapid finger-tapping test (RFT) is an important method for clinical evaluation of movement disorders, including Parkinson's disease (PD). In clinical practice, naked-eye evaluation of RFT results in a coarse judgment of symptom scores. We introduce a novel computer-vision (CV) method for quantification of tapping symptoms through motion analysis of index fingers. The method is unique in that it utilizes facial features to calibrate tapping amplitude, normalizing for the variation in distance between the camera and subject. The study involved 387 video footages of RFT recorded from 13 patients diagnosed with advanced PD. Tapping performance in these videos was rated by two clinicians between the symptom severity levels ('0: normal' to '3: severe') using the unified Parkinson's disease rating scale motor examination of finger-tapping (UPDRS-FT). Another set of recordings in this study consisted of 84 videos of RFT recorded from 6 healthy controls. These videos were processed by a CV algorithm that tracks the index-finger motion between video frames to produce a tapping time series. Different features were computed from this time series to estimate speed, amplitude, rhythm and fatigue in tapping. The features were used to train a support vector machine (1) to categorize the patient group between UPDRS-FT symptom severity levels, and (2) to discriminate between PD patients and healthy controls. A new representative feature of tapping rhythm, 'cross-correlation between the normalized peaks', showed strong Guttman correlation (μ2 = -0.80) with the clinical ratings. The classification of tapping features using the support vector machine classifier and 10-fold cross-validation categorized the patient samples between UPDRS-FT levels with an accuracy of 88%. The same classification scheme discriminated between RFT samples of healthy controls and PD patients with an accuracy of 95%. The work supports the feasibility of the approach, which is presumed suitable for PD monitoring in the home environment. The system offers advantages over other technologies (e.g. magnetic sensors, accelerometers, etc.) previously developed for objective assessment of tapping symptoms. Copyright © 2013 Elsevier B.V. All rights reserved.
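
    One plausible reading of the 'cross-correlation between the normalized peaks' feature is a zero-lag normalized cross-correlation between two peak-amplitude series; a hedged NumPy sketch (the paper's exact definition may differ):

```python
import numpy as np

def normalized_xcorr(a, b):
    """Zero-lag normalized (Pearson-style) cross-correlation between two
    series, e.g. peak-amplitude sequences extracted from a tapping signal.
    Returns a value in [-1, 1]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    n = min(len(a), len(b))
    return float(np.dot(a[:n], b[:n]) / n)
```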

  14. 47 CFR 101.141 - Microwave modulation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-1 2.50 6.17 N/A 4 DS-1 3.75 12.3 N/A 8 DS-1 5.0 18.5 N/A 12 DS-1 10.0 44.7 3 50 1 DS-3/STS-1 20.0 89...) Transmitters carrying digital motion video motion material are exempt from the requirements specified in... video motion material and the minimum bit rate specified in paragraph (a)(1) of this section is met. In...

  15. 47 CFR 101.141 - Microwave modulation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... N/A 4 DS-1 3.75 12.3 N/A 8 DS-1 5.0 18.5 N/A 12 DS-1 10.0 44.7 3 50 1 DS-3/STS-1 20.0 89.4 3 50 2 DS... digital motion video motion material are exempt from the requirements specified in paragraphs (a)(2) and (a)(3) of this section, provided that at least 50 percent of the payload is digital video motion...

  16. Air and Space Power Journal. Volume 24, Number 4, Winter 2010

    DTIC Science & Technology

    2010-01-01

    assessment of damage. In addition to still photos, Predator RPAs collected full-motion video during around-the-clock coverage of select areas in...Dissemination of the video collected by the Predators to a variety of users, both on the ground in Haiti and at locations outside the area of...links, and full-motion-video capability.29 The aircraft must operate from austere forward locations and provide a nominal five-hour endurance with a

  17. A novel multiple description scalable coding scheme for mobile wireless video transmission

    NASA Astrophysics Data System (ADS)

    Zheng, Haifeng; Yu, Lun; Chen, Chang Wen

    2005-03-01

    We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between frames along the temporal direction using motion-compensated temporal filtering; thus high coding performance and flexible scalability can be provided in this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
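
    The core idea of multiple description coding - splitting coefficients so that either description alone yields a coarse reconstruction - can be shown with a toy one-level Haar transform and an even/odd polyphase split (a didactic sketch, not the paper's SPIHT-based codec; all function names are invented):

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal 1-D Haar transform (x must have even length).
    Returns (approximation, detail) coefficient vectors."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def make_descriptions(coeffs):
    """Split a coefficient vector into two descriptions by even/odd index;
    each description can be decoded alone at reduced quality."""
    return coeffs[0::2], coeffs[1::2]

def conceal(desc, n):
    """If one description is lost, estimate the missing samples by
    repeating neighbours from the surviving one (simple concealment)."""
    out = np.empty(n)
    out[0::2] = desc[:(n + 1) // 2]
    out[1::2] = desc[:n // 2]
    return out
```

    The real scheme operates on 2-D spatial-orientation trees and protects motion vectors as well, but the redundancy/robustness tradeoff is the same: each description carries a coarse copy of the signal.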

  18. A video-based system for hand-driven stop-motion animation.

    PubMed

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  19. Triboelectrification based motion sensor for human-machine interfacing.

    PubMed

    Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin

    2014-05-28

    We present triboelectrification based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as directions and magnitude of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m(2), and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices as sensors to accurately record and transform the motions of the human joints, such as elbow, knee, heel, and even fingers, and thus renders it a superior and unique invention in the field of HMI.
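
    Extracting a motion rate from the frequency spectrum, as the FFT analysis above does, amounts to locating the dominant spectral peak. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Strongest frequency component (Hz) of a 1-D signal sampled at fs,
    found as the peak of the magnitude spectrum with DC removed."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(spec)])
```

    For a joint rotating at a roughly constant rate, the peak frequency of the triboelectric output scales with angular velocity, which is what makes this spectral readout usable as a motion sensor.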

  20. A novel sensor for two-degree-of-freedom motion measurement of linear nanopositioning stage using knife edge displacement sensing technique

    NASA Astrophysics Data System (ADS)

    Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum

    2017-06-01

    This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.

  1. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    NASA Astrophysics Data System (ADS)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

    Aiming at the problems of early video-based smoke detection in fire video, this paper proposes a method to extract suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke appears as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos with smoke. The experimental results show the effectiveness of our proposed method when compared with visual observation.
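
    The Otsu segmentation step can be sketched directly from its definition - pick the threshold that maximizes the between-class variance of the gray-level histogram (a straightforward reference implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the 8-bit threshold t maximizing the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                          # class-0 probability up to t
    cum_mean = np.cumsum(p * np.arange(256))   # cumulative gray-level mean
    mu_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w1 = 1.0 - w0[t]
        if w0[t] == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0[t]
        mu1 = (mu_total - cum_mean[t]) / w1
        var = w0[t] * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    Pixels above the returned threshold form one class (e.g. the bright gray-white smoke candidates); the ViBe motion mask then filters out static gray regions such as walls.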

  2. Use of video-assisted intubation devices in the management of patients with trauma.

    PubMed

    Aziz, Michael

    2013-03-01

    Patients with trauma may have airways that are difficult to manage. Patients with blunt trauma are at increased risk of unrecognized cervical spine injury, especially patients with head trauma. Manual in-line stabilization reduces cervical motion and should be applied whenever a cervical collar is removed. All airway interventions cause some degree of cervical spine motion. Flexible fiberoptic intubation causes the least cervical motion of all intubation approaches, and rigid video laryngoscopy provides a good laryngeal view and eases intubation difficulty. In emergency medicine departments, video laryngoscopy use is growing and observational data suggest an improved success rate compared with direct laryngoscopy. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit (IMU) sensors to assist in the analysis of biomechanics. The sensors used are accelerometers and gyroscopes. Orientation estimation is done by digital motion processing in each sensor node. There are nine sensor nodes attached to the upper limbs. These sensors are connected to a PC via a wireless sensor network. The development of kinematic and inverse dynamic models of the upper limb is done in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs. The output of the kinematic model is the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.
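
    Orientation estimation on each IMU node is commonly done by fusing gyroscope integration with the accelerometer tilt estimate; a complementary filter is one minimal example of such fusion (a generic sketch with an assumed smoothing factor α = 0.98, not necessarily the filter used in esMOCA):

```python
import math

def accel_tilt(ax, az):
    """Tilt (pitch) angle in radians from the gravity components measured
    by the accelerometer; noisy but drift-free."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope integration (smooth but drifting) with the
    accelerometer angle (noisy but drift-free) for one orientation angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

    The high α keeps the short-term response dominated by the gyroscope while the small accelerometer contribution slowly pulls the estimate back to the true tilt, cancelling gyro drift.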

  4. Informed Decision Making for In-Home Use of Motion Sensor-Based Monitoring Technologies

    ERIC Educational Resources Information Center

    Bruce, Courtenay R.

    2012-01-01

    Motion sensor-based monitoring technologies are designed to maintain independence and safety of older individuals living alone. These technologies use motion sensors that are placed throughout older individuals' homes in order to derive information about eating, sleeping, and leaving/returning home habits. Deviations from normal behavioral…

  5. Hydra Rendezvous and Docking Sensor

    NASA Technical Reports Server (NTRS)

    Roe, Fred; Carrington, Connie

    2007-01-01

    The U.S. technology to support a CEV AR&D activity is mature and was developed by NASA and supporting industry during an extensive research and development program conducted during the 1990s and early 2000s at the Marshall Space Flight Center. Development and demonstration of a rendezvous/docking sensor was identified early in the AR&D Program as the critical enabling technology that allows automated proximity operations and docking. A first-generation rendezvous/docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on STS-87 and again on STS-95, proving the concept of a video-based sensor. Advances in both video and signal processing technologies and the lessons learned from the two successful flight experiments provided a baseline for the development of a new generation of video-based rendezvous/docking sensor. The Advanced Video Guidance Sensor (AVGS) has greatly increased performance and additional capability for longer-range operation. A Demonstration of Autonomous Rendezvous Technology (DART) flight experiment was flown in April 2005 using AVGS as the primary proximity operations sensor. Because of the absence of a docking mechanism on the target satellite, this mission did not demonstrate the ability of the sensor to control docking. Mission results indicate that the rendezvous sensor operated successfully in "spot mode" (2 km acquisition of the target, bearing data only) but was never commanded to "acquire and track" the docking target. Parts obsolescence issues prevent the construction of current-design AVGS units to support the NASA Exploration initiative. This flight-proven AR&D technology is being modularized and upgraded with additional capabilities through the Hydra project at the Marshall Space Flight Center. Hydra brings a unique engineering approach and sensor architecture to solve the continuing issues of parts obsolescence and multiple sensor integration. This paper presents an approach to sensor hardware trades to address the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS). It will also discuss approaches for upgrading AVGS to address parts obsolescence, and concepts for modularizing the sensor to provide configuration flexibility for multiple vehicle applications. Options for complementary sensors to be integrated into the multi-head Hydra system will also be presented. Complementary sensor options include ULTOR, a digital image correlator system that could provide relative six-degree-of-freedom information independently from AVGS, and time-of-flight sensors, which determine the range between vehicles by timing pulses that travel from the sensor to the target and back. Common targets and integrated targets, suitable for use with the multi-sensor options in Hydra, will also be addressed.

  6. Evaluation of a video-based head motion tracking system for dedicated brain PET

    NASA Astrophysics Data System (ADS)

    Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.

    2015-03-01

    Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used to capture video of the patient's head during PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close-to-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
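
    Recovering a six-degree-of-freedom pose from tracked facial points reduces, for matched 3-D point sets, to the classic Kabsch/Procrustes least-squares problem (a standard sketch; the paper does not state which solver it uses):

```python
import numpy as np

def rigid_pose(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t
    (Kabsch algorithm). P and Q are 3xN arrays of matched 3-D points,
    e.g. triangulated facial features at two time instants."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = cq - R @ cp
    return R, t
```

    The (R, t) time series is exactly the six-degree-of-freedom head pose a reconstruction algorithm needs for event-by-event motion correction.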

  7. Engineering workstation: Sensor modeling

    NASA Technical Reports Server (NTRS)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation comprises subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long-term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from the phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  8. Intelligence Surveillance And Reconnaissance Full Motion Video Automatic Anomaly Detection Of Crowd Movements: System Requirements For Airborne Application

    DTIC Science & Technology

    The collection of Intelligence, Surveillance, and Reconnaissance (ISR) Full Motion Video (FMV) is growing at an exponential rate, and the manual... intelligence for the warfighter. This paper will address the question of how automatic pattern extraction, based on computer vision, can extract anomalies in

  9. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  10. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  11. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  12. 10 CFR 1002.12 - Use of replicas, reproductions, and embossing seals.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., certificates, medals, and plaques. (5) Motion picture film, video tape and other audiovisual media prepared by...) Motion picture film, video tape, and other audiovisual media prepared by or for DOE and attributed... with this part shall be subject to the provisions of 18 U.S.C. 1017, providing penalties for the...

  13. 10 CFR 1002.12 - Use of replicas, reproductions, and embossing seals.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., certificates, medals, and plaques. (5) Motion picture film, video tape and other audiovisual media prepared by...) Motion picture film, video tape, and other audiovisual media prepared by or for DOE and attributed... with this part shall be subject to the provisions of 18 U.S.C. 1017, providing penalties for the...

  14. 10 CFR 1002.12 - Use of replicas, reproductions, and embossing seals.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., certificates, medals, and plaques. (5) Motion picture film, video tape and other audiovisual media prepared by...) Motion picture film, video tape, and other audiovisual media prepared by or for DOE and attributed... with this part shall be subject to the provisions of 18 U.S.C. 1017, providing penalties for the...

  15. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  16. 36 CFR § 1237.4 - What definitions apply to this part?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... definitions apply to this part? (a) See § 1220.18 of this subchapter for definitions of terms used throughout... prints from these negatives. Also included are infrared, ultraviolet, multispectral, video, and radar... still photographs and motion media (i.e., moving images whether on motion picture film or as video...

  17. 10 CFR 1002.12 - Use of replicas, reproductions, and embossing seals.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., certificates, medals, and plaques. (5) Motion picture film, video tape and other audiovisual media prepared by...) Motion picture film, video tape, and other audiovisual media prepared by or for DOE and attributed... with this part shall be subject to the provisions of 18 U.S.C. 1017, providing penalties for the...

  18. 10 CFR 1002.12 - Use of replicas, reproductions, and embossing seals.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., certificates, medals, and plaques. (5) Motion picture film, video tape and other audiovisual media prepared by...) Motion picture film, video tape, and other audiovisual media prepared by or for DOE and attributed... with this part shall be subject to the provisions of 18 U.S.C. 1017, providing penalties for the...

  19. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Official awards, certificates, medals, and plaques. (E) Motion picture film, video tape, and other... governments. (F) Official awards, certificates, and medals. (G) Motion picture film, video tape, and other... with this section shall be subject to the penalty provisions of 18 U.S.C. 506, 701, or 1017, providing...

  20. Tendon rupture associated with excessive smartphone gaming.

    PubMed

    Gilman, Luke; Cage, Dori N; Horn, Adam; Bishop, Frank; Klam, Warren P; Doan, Andrew P

    2015-06-01

    Excessive use of smartphones has been associated with injuries. A 29-year-old, right hand-dominant man presented with chronic left thumb pain and loss of active motion from playing a Match-3 puzzle video game on his smartphone all day for 6 to 8 weeks. On physical examination, the left extensor pollicis longus tendon was not palpable, and no tendon motion was noted with wrist tenodesis. The thumb metacarpophalangeal range of motion was 10° to 80°, and thumb interphalangeal range of motion was 30° to 70°. The clinical diagnosis was rupture of the left extensor pollicis longus tendon. The patient subsequently underwent an extensor indicis proprius (1 of 2 tendons that extend the index finger) to extensor pollicis longus tendon transfer. During surgery, rupture of the extensor pollicis longus tendon was seen between the metacarpophalangeal and wrist joints. The potential for video games to reduce pain perception raises clinical and social considerations about excessive use, abuse, and addiction. Future research should consider whether pain reduction is a reason some individuals play video games excessively, manifest addiction, or sustain injuries associated with video gaming.

  1. Seeing Eye Drones: How The DOD Can Transform CBM And Disaster Response In The Homeland

    DTIC Science & Technology

    2016-12-01

    thesis explores the possibility of integrating small unmanned aircraft systems (sUAS) with video capability and CBRN detection and identification sensors...small, unmanned aircraft systems (sUAS) with video capability and CBRN detection and identification sensors for use by National Guard civil support...CBRN) and hazardous material (HAZMAT) materials, as well as providing video to the incident commander. One of the primary benefits of providing

  2. CUQI: cardiac ultrasound video quality index

    PubMed Central

    Razaak, Manzoor; Martini, Maria G.

    2016-01-01

    Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications, since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics assess the perceptual quality of the video. For a medical video, however, assessing quality in terms of "diagnostic" rather than "perceptual" value is more important. We present a diagnostic-quality-oriented video quality metric for the evaluation of cardiac ultrasound videos, which are characterized by rapid, repetitive cardiac motions and distinct structural information that the proposed metric exploits. The cardiac ultrasound video quality index, the proposed metric, is a full-reference metric that uses the motion and edge information of the cardiac ultrasound video to evaluate video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
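    The validation step described above, correlating metric outputs with expert opinion scores, can be sketched with a plain Pearson correlation; the scores below are invented for illustration and are not from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

metric_scores = [0.92, 0.85, 0.70, 0.55, 0.40]   # hypothetical metric outputs
expert_scores = [4.8, 4.4, 3.6, 3.0, 2.1]        # hypothetical expert opinion scores
r = pearson(metric_scores, expert_scores)
print(round(r, 3))
```

    A correlation near 1 indicates the metric tracks expert judgment well, which is the criterion the paper uses to compare metrics.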

  3. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method based on a high-order model and dynamic filtering (HOMDF) of video pixels. Unlike the constant-brightness and linear-motion assumptions of traditional methods, both the intensity and the position of video pixels are modeled with high-order polynomials in time. The key problem of our method is then to estimate the polynomial coefficients that represent a pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated as a dynamic fusion of the prior estimate from a pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation with pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
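    The high-order idea can be sketched in one dimension: model a pixel's position as a quadratic in time (capturing velocity and acceleration), fit it from three existing frames, and evaluate at the in-between time. This is a toy stand-in; the paper estimates the coefficients via its two energy objectives, and the numbers below are illustrative.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0])       # times of the existing frames
x = np.array([10.0, 12.0, 16.0])    # pixel position: accelerating motion

# Fit x(t) = a*t^2 + b*t + c by least squares (exact through 3 points).
a, b, c = np.polyfit(t, x, 2)

t_mid = 1.5                          # time of the new, interpolated frame
x_mid = a * t_mid**2 + b * t_mid + c
print(round(x_mid, 3))
```

    A linear-motion model would place the pixel at 14.0 at t = 1.5; the quadratic model accounts for the acceleration and lands at 13.75 instead.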

  4. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency between video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often at high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. It differs from vanilla RNNs in two ways: 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer, patch-based rather than frame-based, level; and 2) connections from input layers at previous timesteps to the current hidden layer are added via 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns of short-term, fast-varying motions in adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieves good performance.

  5. Control Program for an Optical-Calibration Robot

    NASA Technical Reports Server (NTRS)

    Johnston, Albert

    2005-01-01

    A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.

  6. Automatic Construction of Wi-Fi Radio Map Using Smartphones

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Li, Qingquan; Zhang, Xing

    2016-06-01

    Indoor positioning could provide interesting services and applications. As one of the most popular indoor positioning methods, location fingerprinting determines the location of mobile users by matching the received signal strength (RSS), which is location dependent. However, fingerprinting-based indoor positioning requires calibration and updating of the fingerprints, which is labor-intensive and time-consuming. In this paper, we propose a visual-based approach for the construction of a radio map for unknown indoor environments without any prior knowledge. This approach collects multi-sensor data, e.g. video, accelerometer, gyroscope, and Wi-Fi signals, while people carrying smartphones walk freely in indoor environments. It then uses the multi-sensor data to recover the trajectories of the people based on an integrated structure from motion (SFM) and image matching method, and finally estimates the locations of sampling points on the trajectories and constructs the Wi-Fi radio map. Experimental results show that the average location error of the fingerprints is about 0.53 m.
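    Once such a radio map exists, the fingerprinting lookup itself is a nearest-neighbor search in signal space. A minimal sketch follows; the locations, access-point names, and RSS values are invented for illustration (the paper's contribution is building the map automatically, not this lookup).

```python
import math

# Radio map: location (x, y) in meters -> {access point: mean RSS in dBm}.
radio_map = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70},
    (5.0, 0.0): {"ap1": -55, "ap2": -60},
    (5.0, 5.0): {"ap1": -70, "ap2": -45},
}

def locate(observed_rss):
    """Return the fingerprint location nearest to the observed RSS vector."""
    def dist(fingerprint):
        shared = set(fingerprint) & set(observed_rss)
        return math.sqrt(sum((fingerprint[a] - observed_rss[a]) ** 2 for a in shared))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))

loc = locate({"ap1": -52, "ap2": -62})   # a reading taken somewhere in the space
print(loc)
```

    Real deployments typically average the k nearest fingerprints rather than taking a single nearest neighbor, which smooths out RSS noise.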

  7. Cardiopulmonary Response to Videogaming: Slaying Monsters Using Motion Sensor Versus Joystick Devices.

    PubMed

    Sherman, Jeffrey D; Sherman, Michael S; Heiman-Patterson, Terry

    2014-10-01

    Replacing physical activity with videogaming has been implicated in causing obesity. Studies have shown that using motion-sensing controllers with activity-promoting videogames expends energy comparable to aerobic exercise; however, the effects of motion-sensing controllers have not been examined with traditional (non-exercise-promoting) videogames. We performed indirect calorimetry and measured heart rate in 14 subjects during rest and traditional videogaming using motion sensor and joystick controllers. Energy expenditure was higher while subjects were playing with the motion sensor (1.30±0.32 kcal/kg/hour) than with the joystick (1.07±0.26 kcal/kg/hour; P<0.01) or resting (0.91±0.24 kcal/kg/hour; P<0.01). Oxygen consumption during videogaming averaged 15.7 percent of predicted maximum for the motion sensor and 11.8 percent of maximum for the joystick. Minute ventilation was higher playing with the motion sensor (10.7±3.5 L/minute) than with the joystick (8.6±1.8 L/minute; P<0.02) or resting (6.7±1.4 L/minute; P<0.001), predominantly because of higher respiratory rates (15.2±4.3 versus 20.3±2.8 versus 20.4±4.2 breaths/minute for resting, the joystick, and the motion sensor, respectively; P<0.001); tidal volume did not change significantly. Peak heart rate during gaming was 16.4 percent higher than resting (78.0±12.0) for the joystick (90.1±15.0; P=0.002) and 17.4 percent higher for the motion sensor (91.6±14.1; P=0.002); mean heart rate did not differ significantly. Playing with a motion sensor burned significantly more calories than with a joystick, but the energy expended was modest. With both consoles, the increased respiratory rate without increased tidal volume and the increased peak heart rate without increased mean heart rate are consistent with psychological stimulation from videogaming rather than a result of exercise. We conclude that using a motion sensor with traditional videogames does not produce enough energy expenditure for cardiovascular conditioning.
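    The per-kilogram rates reported above make the "modest" conclusion easy to quantify. A worked example, assuming a hypothetical 70 kg player (the body mass is not from the study):

```python
# Mean energy-expenditure rates reported in the study, in kcal/kg/hour.
MOTION_SENSOR_RATE = 1.30
JOYSTICK_RATE = 1.07
RESTING_RATE = 0.91

def kcal_burned(rate_kcal_per_kg_hr, mass_kg, hours):
    """Total energy expenditure for a given per-mass rate."""
    return rate_kcal_per_kg_hr * mass_kg * hours

mass = 70.0  # kg, assumed for illustration
extra = kcal_burned(MOTION_SENSOR_RATE, mass, 1.0) - kcal_burned(JOYSTICK_RATE, mass, 1.0)
print(round(extra, 1))  # extra kcal per hour from motion sensor vs. joystick
```

    The difference works out to roughly 16 kcal per hour for this assumed player, which is far below what sustained aerobic exercise expends.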

  8. Manifolds for pose tracking from monocular video

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  9. Linear momentum, angular momentum and energy in the linear collision between two balls

    NASA Astrophysics Data System (ADS)

    Hanisch, C.; Hofmann, F.; Ziese, M.

    2018-01-01

    In an experiment in the basic physics laboratory, kinematic motion processes were analysed. The motion was recorded with a standard video camera at frame rates from 30 to 240 fps, and the videos were processed using video analysis software. Video detection was used to analyse the symmetric one-dimensional collision between two balls. Conservation of linear and angular momentum leads to a crossover from rolling to sliding directly after the collision. By varying the rolling radius, the system could be tuned from a regime in which the balls move away from each other after the collision to a situation in which they re-collide.
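    The rolling-to-sliding crossover follows from a one-line check: in a symmetric head-on collision of identical balls, the linear velocities exchange while each ball's spin is unchanged, so the rolling condition v = ωr is violated immediately after impact. A sketch with illustrative numbers (not taken from the paper):

```python
r = 0.02          # ball radius in m (assumed)
v_before = 0.5    # approach speed of each ball in m/s (assumed)
omega = v_before / r   # rolling without slipping before the collision

# Equal masses, elastic head-on collision: linear velocities are exchanged,
# so each ball reverses its translational velocity; spin omega is unchanged.
v_after = -v_before

slip = v_after - omega * r   # contact-point slip velocity after impact
print(slip)
```

    The nonzero slip velocity means friction at the contact point must act after the collision, which is what drives the balls back toward rolling and, for suitable rolling radii, makes them re-collide.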

  10. Shaking video stabilization with content completion

    NASA Astrophysics Data System (ADS)

    Peng, Yi; Ye, Qixiang; Liu, Yanmei; Jiao, Jianbin

    2009-01-01

    A new stabilization algorithm to counterbalance shaking motion in a video, based on the classical Kanade-Lucas-Tomasi (KLT) method, is presented in this paper. Feature points are evaluated with the law of large numbers and a clustering algorithm to reduce the side effect of the moving foreground. Analysis of the change of motion direction is also carried out to detect the presence of shaking. For video clips with detected shaking, an affine transformation is performed to warp the current frame to the reference one. In addition, the content of a frame that goes missing during stabilization is completed with optical flow analysis and a mosaicking operation. Experiments on video clips demonstrate the effectiveness of the proposed algorithm.
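    The warping step can be sketched in a simplified form: here the KLT-estimated affine transform is replaced by an assumed, known integer translation, and the frame is shifted back onto the reference, zero-filling the exposed border (which the paper's mosaicking step would then fill in). Data is synthetic.

```python
import numpy as np

def warp_translate(frame, dx, dy):
    """Shift a 2-D frame by integer (dx, dy), zero-filling exposed borders."""
    out = np.zeros_like(frame)
    h, w = frame.shape
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        frame[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

frame = np.arange(16.0).reshape(4, 4)        # the shaken current frame
stabilized = warp_translate(frame, dx=1, dy=0)  # undo a 1-pixel leftward shake
print(stabilized[0])
```

    A full implementation would estimate a 2x3 affine matrix from the clustered KLT feature tracks and resample with sub-pixel interpolation rather than an integer shift.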

  11. Miniature low-power inertial sensors: promising technology for implantable motion capture systems.

    PubMed

    Lambrecht, Joris M; Kirsch, Robert F

    2014-11-01

    Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation, with and without magnetometer-based heading compensation, relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording of seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring in various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation.
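    The paper does not spell out its fusion algorithm, but a classic one-dimensional complementary filter illustrates the kind of gyro/accelerometer fusion such orientation estimates rely on: the integrated gyro rate is smooth but drifts, the accelerometer tilt is noisy but drift-free, and a weighted blend keeps the best of both. The gain, time step, and data below are invented.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a gyro rate stream (deg/s) with accelerometer tilt angles (deg)."""
    angle = accel_angles[0]
    for rate, acc in zip(gyro_rates, accel_angles):
        # Trust the short-term gyro integration, correct slowly toward accel.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
    return angle

# Constant true tilt of 10 degrees: gyro reads ~0 deg/s, accel reads ~10 deg.
est = complementary_filter([0.0] * 500, [10.0] * 500)
print(round(est, 2))
```

    Adding the magnetometer plays the same corrective role for heading (yaw), which gravity alone cannot observe; that is the "heading compensation" evaluated in the study.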

  12. The Use of Wearable Inertial Motion Sensors in Human Lower Limb Biomechanics Studies: A Systematic Review

    PubMed Central

    Fong, Daniel Tik-Pui; Chan, Yue-Yan

    2010-01-01

    Wearable motion sensors consisting of accelerometers, gyroscopes, and magnetic sensors are readily available nowadays. The small size and low production cost of motion sensors make them a very good tool for human motion analysis. However, data processing and the accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature on the use of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus, and IEEE Xplore. Thirty-nine full papers and conference abstracts on related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods, and applications were reviewed. PMID:22163542

  13. The use of wearable inertial motion sensors in human lower limb biomechanics studies: a systematic review.

    PubMed

    Fong, Daniel Tik-Pui; Chan, Yue-Yan

    2010-01-01

    Wearable motion sensors consisting of accelerometers, gyroscopes, and magnetic sensors are readily available nowadays. The small size and low production cost of motion sensors make them a very good tool for human motion analysis. However, data processing and the accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature on the use of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus, and IEEE Xplore. Thirty-nine full papers and conference abstracts on related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods, and applications were reviewed.

  14. Vestibulo-Ocular Responses to Vertical Translation using a Hand-Operated Chair as a Field Measure of Otolith Function

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Campbell, D. J.; Reschke, M. F.; Prather, L.; Clement, G.

    2016-01-01

    The translational Vestibulo-Ocular Reflex (tVOR) is an important otolith-mediated response to stabilize gaze during natural locomotion. One goal of this study was to develop a measure of the tVOR using a simple hand-operated chair that provided passive vertical motion. Binocular eye movements were recorded with a tight-fitting video mask in ten healthy subjects. Vertical motion was provided by a modified spring-powered chair (swopper.com) at approximately 2 Hz (+/- 2 cm displacement) to approximate the head motion during walking. Linear acceleration was measured with wireless inertial sensors (Xsens) mounted on the head and torso. Eye movements were recorded while subjects viewed near (0.5m) and far (approximately 4m) targets, and then imagined these targets in darkness. Subjects also provided perceptual estimates of target distances. Consistent with the kinematic properties shown in previous studies, the tVOR gain was greater with near targets, and greater with vision than in darkness. We conclude that this portable chair system can provide a field measure of otolith-ocular function at frequencies sufficient to elicit a robust tVOR.

  15. Shape-based human detection for threat assessment

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.

    2004-07-01

    Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intruder is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost-effective, these systems suffer from high rates of false alarm, especially when monitoring open environments. Any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects at different viewing angles and distances.
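    The tangent-space idea can be sketched by computing a contour's turning angles: a closed polygon becomes a sequence of exterior turns, a representation that is invariant to translation and rotation, which is what makes database comparison practical. This is a simplified illustration on a synthetic square, not the paper's full pipeline.

```python
import math

def turning_angles(points):
    """Exterior turning angle (radians) at each vertex of a closed polygon."""
    n = len(points)
    angles = []
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        h1 = math.atan2(by - ay, bx - ax)   # heading of incoming edge
        h2 = math.atan2(cy - by, cx - bx)   # heading of outgoing edge
        # Wrap the turn into (-pi, pi].
        angles.append((h2 - h1 + math.pi) % (2 * math.pi) - math.pi)
    return angles

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
angs = turning_angles(square)
print([round(a, 3) for a in angs])   # four 90-degree turns
```

    For any simple closed contour traversed once counterclockwise, the turns sum to 2π, a useful sanity check on extracted contours.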

  16. Vehicle tracking in wide area motion imagery from an airborne platform

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; van Huis, Jasper R.; Eendebak, Pieter T.; Baan, Jan

    2015-10-01

    Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all data for information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For vehicle detection we use a cascaded set of classifiers trained with the Adaboost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For vehicle tracking we use a local template matching algorithm. This approach has two advantages. First, it does not depend on image motion stabilization, and it counters the inaccuracy of the GPS data embedded in the video data. Second, it can find matches when the vehicle detector misses a detection. This results in long tracks even when the imagery has a low frame rate. To minimize false detections, we also integrate height information from a 3D reconstruction created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast-moving vehicles. This enables the image analyst to perform a faster and more effective search of the data.
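    The local template-matching step amounts to sliding the vehicle's appearance template over a search window and picking the offset with the smallest sum of squared differences (SSD). A minimal sketch on synthetic data:

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best SSD match of template in image."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0          # a synthetic "vehicle" patch
template = np.ones((2, 2))     # its appearance template
pos = match_template(image, template)
print(pos)
```

    Because the match is local to a small search window around the predicted position, it tolerates unstabilized imagery and bridges frames where the detector fires nothing, which is how long tracks survive low frame rates.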

  17. Development of an artificial sensor for hydrodynamic detection inspired by a seal's whisker array.

    PubMed

    Eberhardt, William C; Wakefield, Brendan F; Murphy, Christin T; Casey, Caroline; Shakhsheer, Yousef; Calhoun, Benton H; Reichmuth, Colleen

    2016-08-31

    Nature has shaped effective biological sensory systems to receive complex stimuli generated by organisms moving through water. Similar abilities have not yet been fully developed in artificial systems for underwater detection and monitoring, but such technology would enable valuable applications for military, commercial, and scientific use. We set out to design a fluid motion sensor array inspired by the searching performance of seals, which use their whiskers to find and follow underwater wakes. This sensor prototype, called the Wake Information Detection and Tracking System (WIDTS), features multiple whisker-like elements that respond to hydrodynamic disturbances encountered while moving through water. To develop and test this system, we trained a captive harbor seal (Phoca vitulina) to wear a blindfold while tracking a remote-controlled, propeller-driven submarine. After mastering the tracking task, the seal learned to carry the WIDTS adjacent to its own vibrissal array during active pursuit of the target. Data from the WIDTS sensors describe changes in the deflection angles of the whisker elements as they pass through the hydrodynamic trail left by the submarine. Video performance data show that these detections coincide temporally with WIDTS-wake intersections. Deployment of the sensors on an actively searching seal allowed for the direct comparison of our instrument to the ability of the biological sensory system in a proof-of-concept demonstration. The creation of the WIDTS provides a foundation for instrument development in the field of biomimetic fluid sensor technology.

  18. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
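    The redundancy-removal idea can be sketched as follows: gather the candidate motion vectors requested for all PU partitions in a phase, deduplicate them so each candidate's cost is computed once, then let every PU pick its own best point from the shared cost table. The PU names follow HEVC partition naming, but the cost function is a toy stand-in for SAD and the candidate points are invented.

```python
def evaluate_phase(pu_search_points, cost):
    """pu_search_points maps PU name -> list of candidate motion vectors."""
    unique = {p for pts in pu_search_points.values() for p in pts}
    cost_table = {p: cost(p) for p in unique}        # each point costed once
    best = {pu: min(pts, key=cost_table.__getitem__)
            for pu, pts in pu_search_points.items()}
    return best, len(cost_table)

pus = {"2Nx2N": [(0, 0), (1, 0), (0, 1)],
       "Nx2N":  [(1, 0), (2, 0), (0, 0)]}
toy_cost = lambda p: abs(p[0] - 1) + abs(p[1])       # stand-in for SAD; min at (1, 0)
best, n_evals = evaluate_phase(pus, toy_cost)
print(best["2Nx2N"], n_evals)
```

    Here six requested points collapse to four cost evaluations; in hardware, sharing the cost table across PUs is what enables the parallel, single-pass structure of each phase.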

  19. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. 
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
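
The quadratic extrapolation underlying such a non-linear motion model can be sketched as follows; the helper name and the assumption of equally spaced frames are illustrative, not the patented method itself:

```python
def extrapolate_position(p1, p2, p3):
    """Quadratic (second-order) extrapolation through three equally spaced
    samples of a pixel trajectory: p4 = 3*p3 - 3*p2 + p1."""
    return tuple(3 * c3 - 3 * c2 + c1 for c1, c2, c3 in zip(p1, p2, p3))

print(extrapolate_position((0, 0), (2, 1), (4, 2)))  # constant velocity: (6, 3)
print(extrapolate_position((0, 0), (1, 0), (4, 0)))  # accelerating: (9, 0)
```

With constant velocity the formula degenerates to linear extrapolation, which is why a non-linear model of this kind subsumes the linear one.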

  20. The Advanced Video Guidance Sensor: Orbital Express and the Next Generation

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Heaton, Andrew F.; Pinson, Robin M.; Carrington, Connie L.; Lee, James E.; Bryan, Thomas C.; Robertson, Bryan A.; Spencer, Susan H.; Johnson, Jimmie E.

    2008-01-01

The Orbital Express (OE) mission performed the first autonomous rendezvous and docking in the history of the United States on May 5-6, 2007, with the Advanced Video Guidance Sensor (AVGS) acting as one of the primary docking sensors. Since that event, the OE spacecraft performed four more rendezvous and docking maneuvers, each time using the AVGS as one of the docking sensors. The Marshall Space Flight Center's (MSFC's) AVGS is a near-field proximity operations sensor that was integrated into the Autonomous Rendezvous and Capture Sensor System (ARCSS) on OE. The ARCSS provided the relative state knowledge to allow the OE spacecraft to rendezvous and dock. The AVGS is a mature sensor technology designed to support Automated Rendezvous and Docking (AR&D) operations. It is a video-based, laser-illuminated sensor that can determine the relative position and attitude between itself and its target. Due to parts obsolescence, the AVGS that was flown on OE can no longer be manufactured. MSFC has been working on the next generation of AVGS for application to future Constellation missions. This paper provides an overview of the performance of the AVGS on Orbital Express and discusses the work on the Next Generation AVGS (NGAVGS).

  1. Psychovisual masks and intelligent streaming RTP techniques for the MPEG-4 standard

    NASA Astrophysics Data System (ADS)

    Mecocci, Alessandro; Falconi, Francesco

    2003-06-01

In today's multimedia audio-video communication systems, data compression plays a fundamental role by reducing bandwidth waste and the costs of infrastructure and equipment. Among the different compression standards, MPEG-4 is becoming more and more accepted and widespread. Even though one of the fundamental aspects of this standard is the possibility of coding video objects separately (i.e., separating moving objects from the background and adapting the coding strategy to the video content), currently implemented codecs work only at the full-frame level. In this way, many advantages of the flexible MPEG-4 syntax are missed. This lack is due both to the difficulty of properly segmenting moving objects in real scenes (featuring arbitrary motion of the objects and of the acquisition sensor), and to the current use of these codecs, which are mainly oriented towards the market of DVD backups (a full-frame approach is enough for these applications). In this paper we propose a codec for MPEG-4 real-time object streaming that codes the moving objects and the scene background separately. The proposed codec is capable of adapting its strategy during the transmission by analysing the video currently transmitted and setting the coder parameters and modalities accordingly. For example, the background can be transmitted as a whole or by dividing it into "slightly detailed" and "highly detailed" zones that are coded in different ways to reduce the bit-rate while preserving the perceived quality. The coder can automatically switch in real time from one modality to the other during the transmission, depending on the current video content. Psychovisual masks and other video-content-based measurements have been used as inputs for a Self Learning Intelligent Controller (SLIC) that changes the parameters and the transmission modalities. 
The current implementation is based on the ISO 14496 standard code that allows Video Object (VO) transmission (other open-source codecs, such as DivX, Xvid, and Cisco's Mpeg-4IP, have been analyzed but, as of today, do not support VO). The original code has been deeply modified to integrate the SLIC and to adapt it for real-time streaming. A custom RTP (Real Time Protocol) has been defined and a client-server application has been developed. The viewer can decode and demultiplex the stream in real time, while adapting to the changing modalities adopted by the server according to the current video content. The proposed codec works as follows: the image background is separated by means of a segmentation module and transmitted with a wavelet compression scheme similar to that used in JPEG2000. The VO are coded separately and multiplexed with the background stream. At the receiver, the stream is demultiplexed to obtain the background and the VO, which are subsequently pasted together. The final quality depends on many factors, in particular: the quantization parameters, the Group Of Video Object (GOV) length, the GOV structure (i.e., the number of I-P-B VOP), and the search area for motion compensation. These factors are strongly related to the following measurement parameters (defined during the development): the Objects Apparent Size (OAS) in the scene, the Video Object Incidence factor (VOI), and the temporal correlation (measured through the Normalized Mean SAD, NMSAD). The SLIC module analyzes the currently transmitted video and selects the most appropriate settings by choosing from a predefined set of transmission modalities. For example, in the case of a highly temporally correlated sequence, the number of B-VOP is increased to improve the compression ratio. 
The strategy for the selection of the number of B-VOP turns out to be very different from those reported in the literature for B-frames (adopted for MPEG-1 and MPEG-2), due to the different behaviour of the temporal correlation when limited only to moving objects. The SLIC module also decides how to transmit the background. In our implementation we adopted the Visual Brain theory, i.e., the study of what the "psychic eye" can get from a scene. According to this theory, a Psychomask Image Analysis (PIA) module has been developed to extract the visually homogeneous regions of the background. The PIA module produces two complementary masks: one for the visually low-variance zones and one for the highly variable zones; these zones are compressed with different strategies and encoded into two multiplexed streams. Practical experiments showed that the separate coding is advantageous only if the low-variance zones exceed 50% of the whole background area (due to the overhead of transmitting the zone masks). The SLIC module decides the appropriate transmission modality by analyzing the results produced by the PIA module. The main features of this codec are low bit-rate, good image quality, and coding speed. The current implementation runs in real time on standard PC platforms, the major limitation being the fixed position of the acquisition sensor. This limitation is due to the difficulty of separating moving objects from the background when the acquisition sensor moves. Our current real-time segmentation module does not produce suitable results if the acquisition sensor moves (only slight oscillatory movements are tolerated). 
In any case, the system is particularly suitable for tele-surveillance applications at low bit-rates, where the camera is usually fixed or alternates among some predetermined positions (our segmentation module is capable of accurately separating moving objects from the static background when the acquisition sensor stops, even if different scenes are seen as a result of the sensor displacements). Moreover, the proposed architecture is general, in the sense that when real-time, robust segmentation systems (capable of separating objects in real time from the background while the sensor itself is moving) become available, they can be easily integrated while leaving the rest of the system unchanged. Experimental results on real sequences for traffic monitoring and for people tracking and safety control are reported and discussed in depth in the paper. The whole system has been implemented in standard ANSI C and currently runs on standard PCs under the Microsoft Windows operating system (Windows 2000 Pro and Windows XP).

  2. The Systems Engineering Design of a Smart Forward Operating Base Surveillance System for Forward Operating Base Protection

    DTIC Science & Technology

    2013-06-01

fixed sensors located along the perimeter of the FOB. The video is analyzed for facial recognition to alert the Network Operations Center (NOC...the UAV is processed on board for facial recognition and video for behavior analysis is sent directly to the Network Operations Center (NOC). Video...captured by the fixed sensors are sent directly to the NOC for facial recognition and behavior analysis processing. The multi-directional signal

  3. Distance Learning Using Digital Fiber Optics: Applications, Technologies, and Benefits.

    ERIC Educational Resources Information Center

    Currer, Joanne M.

    Distance learning provides special or advanced classes in rural schools where declining population has led to decreased funding and fewer classes. With full-motion video using digital fiber, two or more sites are connected into a two-way, full-motion, video conference. The teacher can see and hear the students, and the students can see and hear…

  4. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

High resolution is important for earth remote sensors, while vibration of the remote sensors' platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction, focusing on algorithm optimization in image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  5. Motion and ranging sensor system for through-the-wall surveillance system

    NASA Astrophysics Data System (ADS)

    Black, Jeffrey D.

    2002-08-01

    A portable Through-the-Wall Surveillance System is being developed for law enforcement, counter-terrorism, and military use. The Motion and Ranging Sensor is a radar that operates in a frequency band that allows for surveillance penetration of most non-metallic walls. Changes in the sensed radar returns are analyzed to detect the human motion that would typically be present during a hostage or barricaded suspect scenario. The system consists of a Sensor Unit, a handheld Remote Display Unit, and an optional laptop computer Command Display Console. All units are battery powered and a wireless link provides command and data communication between units. The Sensor Unit is deployed close to the wall or door through which the surveillance is to occur. After deploying the sensor the operator may move freely as required by the scenario. Up to five Sensor Units may be deployed at a single location. A software upgrade to the Command Display Console is also being developed. This software upgrade will combine the motion detected by multiple Sensor Units and determine and track the location of detected motion in two dimensions.
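
Locating motion in two dimensions from several ranging Sensor Units reduces, in the simplest case, to intersecting range circles. A minimal sketch, with the geometry and the function name assumed for illustration (not the product's tracking algorithm):

```python
import math

def locate(p1, r1, p2, r2):
    """Return one intersection of circles (p1, r1) and (p2, r2), or None
    when the circles do not intersect (no consistent motion location)."""
    d = math.dist(p1, p2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # along-baseline offset
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # perpendicular offset
    mx = p1[0] + a * (p2[0] - p1[0]) / d
    my = p1[1] + a * (p2[1] - p1[1]) / d
    return (mx + h * (p2[1] - p1[1]) / d, my - h * (p2[0] - p1[0]) / d)

# Two Sensor Units on the same wall, both ranging the same moving person:
print(locate((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))  # (3.0, -4.0)
```

With two sensors there is a mirror-image ambiguity (the other intersection point); a third deployed unit, or knowledge of which side of the wall is under surveillance, resolves it.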

  6. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefitted from significant progresses lately, notably in terms of achievable rate-distortion performances. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the video sequence temporal correlation properties during the generation of side information (SI). In fact, the decoder side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as long as these groups are being decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.
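
The decoder-side motion estimation behind MCTI can be illustrated with a toy block-matching search between two key frames; the exhaustive SAD search and the tiny frames below are assumptions for illustration, not the DISCOVER implementation:

```python
def best_motion_vector(prev, nxt, block, search=2):
    """Integer-pel SAD block matching: find the (dx, dy) minimizing the sum
    of absolute differences between a bs x bs block of `prev` at (bx, by)
    and the displaced block in `nxt`."""
    bx, by, bs = block
    h, w = len(prev), len(prev[0])
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= bx + dx <= w - bs and 0 <= by + dy <= h - bs):
                continue  # displaced block would fall outside the frame
            sad = sum(abs(prev[by + y][bx + x] - nxt[by + dy + y][bx + dx + x])
                      for y in range(bs) for x in range(bs))
            if best is None or sad < best[0]:
                best = (sad, dx, dy)
    return best[1], best[2]

prev = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
nxt  = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 0, 0], [0, 0, 0, 0]]
mv = best_motion_vector(prev, nxt, block=(0, 0, 2))
print(mv)                        # (2, 0): the block moved 2 px to the right
print((mv[0] // 2, mv[1] // 2))  # halfway projection used to synthesize SI
```

Projecting each block halfway along its vector synthesizes the side information for the intermediate WZ frame; the refinement schemes discussed above improve on exactly this approximation.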

  7. Investigation of kinematic features for dismount detection and tracking

    NASA Astrophysics Data System (ADS)

    Narayanaswami, Ranga; Tyurina, Anastasia; Diel, David; Mehra, Raman K.; Chinn, Janice M.

    2012-05-01

With recent changes in threats and methods of warfighting and the use of unmanned aircraft, ISR (Intelligence, Surveillance and Reconnaissance) activities have become critical to the military's efforts to maintain situational awareness and neutralize the enemy's activities. The identification and tracking of dismounts from surveillance video is an important step in this direction. Our approach combines advanced ultra-fast registration techniques to identify moving objects with a classification algorithm based on both static and kinematic features of the objects. Our objective was to push the acceptable resolution beyond the capability of industry-standard feature extraction methods such as SIFT (Scale Invariant Feature Transform) and, inspired by it, SURF (Speeded-Up Robust Features). Both of these methods utilize single-frame images. We exploited the temporal component of the video signal to develop kinematic features. Of particular interest were the easily distinguishable frequencies characteristic of bipedal human versus quadrupedal animal motion. We examine limits of performance, and the frame rates and resolution required for discriminating humans, animals, and vehicles. A few seconds of video signal at an acceptable frame rate allow us to lower the resolution requirements for individual frames by as much as a factor of five, which translates into a corresponding increase in the acceptable standoff distance between the sensor and the object of interest.
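
The bipedal-versus-quadrupedal cue can be illustrated by locating the dominant frequency of a limb-motion signal extracted from video; the plain DFT scan and the synthetic 2 Hz signal below are assumptions for illustration, not the authors' pipeline:

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest non-DC DFT bin."""
    n = len(signal)
    mean = sum(signal) / n
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum((signal[t] - mean) * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = sum((signal[t] - mean) * math.sin(2 * math.pi * k * t / n)
                 for t in range(n))
        mag = re * re + im * im      # squared magnitude is enough to rank bins
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 30.0                                 # assumed 30 fps surveillance video
sig = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(90)]  # 2 Hz swing
print(dominant_frequency(sig, fs))        # 2.0, consistent with human gait
```

A few seconds of signal is enough to resolve the gait band even at low spatial resolution, which is the point made in the abstract: the discriminating information is temporal, not per-frame.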

  8. A wearable strain sensor based on a carbonized nano-sponge/silicone composite for human motion detection.

    PubMed

    Yu, Xiao-Guang; Li, Yuan-Qing; Zhu, Wei-Bin; Huang, Pei; Wang, Tong-Tong; Hu, Ning; Fu, Shao-Yun

    2017-05-25

    Melamine sponge, also known as nano-sponge, is widely used as an abrasive cleaner in our daily life. In this work, the fabrication of a wearable strain sensor for human motion detection is first demonstrated with a commercially available nano-sponge as a starting material. The key resistance sensitive material in the wearable strain sensor is obtained by the encapsulation of a carbonized nano-sponge (CNS) with silicone resin. The as-fabricated CNS/silicone sensor is highly sensitive to strain with a maximum gauge factor of 18.42. In addition, the CNS/silicone sensor exhibits a fast and reliable response to various cyclic loading within a strain range of 0-15% and a loading frequency range of 0.01-1 Hz. Finally, the CNS/silicone sensor as a wearable device for human motion detection including joint motion, eye blinking, blood pulse and breathing is demonstrated by attaching the sensor to the corresponding parts of the human body. In consideration of the simple fabrication technique, low material cost and excellent strain sensing performance, the CNS/silicone sensor is believed to have great potential in the next-generation of wearable devices for human motion detection.
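
The quoted gauge factor follows the standard definition GF = (ΔR/R0)/ε; the resistance values below are invented purely to reproduce the reported figure of 18.42:

```python
def gauge_factor(r0, r, strain):
    """GF = (delta R / R0) / strain, the relative resistance change per unit strain."""
    return ((r - r0) / r0) / strain

# illustrative: resistance rising from 100 kOhm to 376.3 kOhm at 15% strain
print(round(gauge_factor(100e3, 376.3e3, 0.15), 2))  # 18.42
```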

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leary, T.J.; Lamb, A.

The Department of Energy's Office of Arms Control and Non-Proliferation (NN-20) has developed a suite of airborne remote sensing systems that simultaneously collect coincident data from a US Navy P-3 aircraft. The primary objective of the Airborne Multisensor Pod System (AMPS) Program is "to collect multisensor data that can be used for data research, both to reduce interpretation problems associated with data overload and to develop information products more complete than can be obtained from any single sensor." The sensors are housed in wing-mounted pods and include: a Ku-Band Synthetic Aperture Radar; a CASI Hyperspectral Imager; a Daedalus 3600 Airborne Multispectral Scanner; a Wild Heerbrugg RC-30 motion-compensated large-format camera; various high-resolution, light-intensified and thermal video cameras; and several experimental sensors (e.g., the Portable Hyperspectral Imager for Low-Light Spectroscopy (PHILLS)). Over the past year or so, the Coastal Marine Resource Assessment (CAMRA) group at the Florida Department of Environmental Protection's Marine Research Institute (FMRI) has been working with the Department of Energy through the Naval Research Laboratory to develop applications and products from existing data. Considerable effort has been spent identifying image format and integration parameters. 2 refs., 3 figs., 2 tabs.

  10. Study of Submicron Particle Size Distributions by Laser Doppler Measurement of Brownian Motion.

    DTIC Science & Technology

    1984-10-29

APPENDIX: COMPUTER SIMULATION OF THE BROWNIAN MOTION SENSOR SIGNALS … scattering regime by analysis of the scattered light intensity and particle mass (size) obtained using the Brownian motion sensor. Task V - By application of the Brownian motion sensor in a flat-flame burner, the contractor shall assess the application of this technique for in-situ sizing of submicron…

  11. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.

    PubMed

    Kim, Woosuk; Kim, Myunggyu

    2018-03-19

    In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
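
The motion model above, a sequence of sub-motions separated by boundary states, can be sketched as a simple state machine; the threshold detectors below stand in for the paper's deep-network sequence classifier and are purely illustrative assumptions:

```python
def segment(samples, boundaries):
    """Assign each sample to a phase index, advancing to the next phase
    whenever the current boundary-state detector fires."""
    phase, out = 0, []
    for s in samples:
        if phase < len(boundaries) and boundaries[phase](s):
            phase += 1
        out.append(phase)
    return out

# Toy kick signal: backswing starts when velocity turns negative,
# ball contact when it spikes above a threshold.
vel = [0, -1, -3, -2, 8, 9, 2, 0]
phases = segment(vel, [lambda v: v < 0, lambda v: v > 5])
print(phases)  # [0, 1, 1, 1, 2, 2, 2, 2]
```

The output is exactly the temporally classified motion data the abstract describes: every sample carries a phase label, so each sub-motion can be analyzed in isolation.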

  12. Vital sign monitoring for elderly at home: development of a compound sensor for pulse rate and motion.

    PubMed

    Sum, K W; Zheng, Y P; Mak, A F T

    2005-01-01

This paper describes the development of a miniaturized wearable vital sign monitor aimed at use by the elderly at home. The development of a compound sensor for pulse rate, motion, and skin temperature is reported. A pair of infrared sensors working in reflection mode was used to detect the pulse rate at various sites over the body, including the wrist and finger. Meanwhile, a motion sensor was used to detect the motion of the body. In addition, the temperature of the skin surface was sensed by a semiconductor temperature sensor. A prototype has been built into a box with dimensions of 2 × 2.5 × 4 cm. The device includes the sensors, microprocessor, circuits, battery, and a wireless transceiver for communicating data with a data terminal.

  13. Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring

    PubMed Central

    Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen

    2018-01-01

Flexible electronic devices offer the capability to integrate and adapt with the human body. These devices are mountable on surfaces of various shapes, which allows us to attach them to clothes or directly onto the body. This paper presents a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of a robotic arm. As a proof of concept, we developed a smart glove with five sensors integrated into it to detect finger motions and transfer them to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny finger motions and successfully drive the robotic hand. PMID:29389851

  14. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
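
A toy version of the reaction-diffusion idea: each node's activator value diffuses to its neighbours and is reinforced where a target is observed, so the coding-rate field peaks at the observing node and decays smoothly around it. The coefficients and the 1-D line of nodes are illustrative assumptions, not the authors' exact model:

```python
def step(u, source, D=0.2, decay=0.1, feed=1.0):
    """One explicit update: diffusion to neighbours + decay + feed at the
    node currently observing the target (zero-flux boundaries)."""
    n = len(u)
    nxt = []
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        lap = left + right - 2 * u[i]
        react = (feed if i == source else 0.0) - decay * u[i]
        nxt.append(u[i] + D * lap + react)
    return nxt

u = [0.0] * 9                       # activator level at 9 camera nodes
for _ in range(300):                # iterate toward the steady spatial pattern
    u = step(u, source=4)
print([round(x, 2) for x in u])     # peaks at node 4 and decays outward
```

Mapping each node's steady activator level to its video coding rate gives exactly the desired behaviour: high rate near the target, graceful fall-off elsewhere, all computed from local neighbour exchanges.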

  15. Development and evaluation of a SUAS perching system

    NASA Astrophysics Data System (ADS)

    Reynolds, Ryan

Perching has been proposed as a possible landing technique for Small Unmanned Aircraft Systems (SUAS). The current research study develops an onboard open-loop perching system for a fixed-wing SUAS and examines the impact of initial flight speed and sensor placement on the perching dynamics. A catapult launcher and a modified COTS aircraft were used for the experiments, while an ultrasonic sensor on the aircraft was used to detect the perching target. Thirty tests were conducted varying the initial launch speed and ultrasonic sensor placement to see if they affected the time at which the aircraft reaches its maximum pitch angle, since the maximum pitch angle is the optimum perching point for the aircraft. High-speed video was analyzed to obtain flight data, along with data from an onboard inertial measurement unit. The data were analyzed using a Model I two-way ANOVA to determine whether launch speed and sensor placement affect the optimum perching point where the aircraft reaches its maximum pitch angle during the maneuver. The results show that the launch speed does affect the time at which the maximum pitch angle occurs, but sensor placement does not. This means a closed-loop system will need to adjust its perching distance based on its initial velocity. Since sensor placement had no noticeable effect, the ultrasonic sensor can be placed on the nose or the wing of the aircraft as needed for the design. There was also no noticeable interaction between the two variables. Aerodynamic parameters such as lift, drag, and moment coefficients were derived from the dynamic equations of motion for use in numerical simulations and dynamic perching models.
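
A balanced two-way (Model I) ANOVA of this kind can be computed directly from sums of squares; the sketch below uses made-up pitch-time samples (two speeds × two placements, two replicates each), not the study's data:

```python
def two_way_anova(data):
    """data[i][j] = replicate measurements for level i of factor A and
    level j of factor B (balanced design). Returns (F_A, F_B, F_AB)."""
    a, b, r = len(data), len(data[0]), len(data[0][0])
    total = [x for row in data for cell in row for x in cell]
    grand = sum(total) / len(total)
    mean_a = [sum(sum(c) for c in data[i]) / (b * r) for i in range(a)]
    mean_b = [sum(sum(data[i][j]) for i in range(a)) / (a * r) for j in range(b)]
    mean_cell = [[sum(data[i][j]) / r for j in range(b)] for i in range(a)]
    ss_a = b * r * sum((m - grand) ** 2 for m in mean_a)
    ss_b = a * r * sum((m - grand) ** 2 for m in mean_b)
    ss_ab = r * sum((mean_cell[i][j] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((x - mean_cell[i][j]) ** 2
               for i in range(a) for j in range(b) for x in data[i][j])
    ms_e = ss_e / (a * b * (r - 1))          # error mean square
    return (ss_a / (a - 1) / ms_e,           # F for factor A (launch speed)
            ss_b / (b - 1) / ms_e,           # F for factor B (placement)
            ss_ab / ((a - 1) * (b - 1)) / ms_e)  # F for the interaction

# two launch speeds x two sensor placements, two replicate pitch times each
data = [[[1.0, 1.1], [1.1, 1.0]],
        [[2.0, 2.1], [2.1, 2.0]]]
f_speed, f_placement, f_interaction = two_way_anova(data)
print(f_speed > f_placement)   # True
```

With these invented numbers the speed effect dominates (F ≈ 400) while placement and interaction F values are near zero, mirroring the qualitative finding in the abstract.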

  16. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  17. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  18. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  19. 47 CFR 74.870 - Wireless video assist devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Wireless video assist devices. 74.870 Section... Stations § 74.870 Wireless video assist devices. Television broadcast auxiliary licensees and motion picture and television producers, as defined in § 74.801 may operate wireless video assist devices on a...

  20. Multimedia Instruction Puts Teachers in the Director's Chair.

    ERIC Educational Resources Information Center

    Trotter, Andrew

    1990-01-01

    Teachers can produce and direct their own instructional videos using computer-driven multimedia. Outlines the basics in combining audio and video technologies to produce videotapes that mix animated and still graphics, sound, and full-motion video. (MLF)

  1. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
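
Extracting acceleration from high-frame-rate video amounts to taking finite differences of per-frame positions; a minimal sketch with an assumed frame rate and idealized rocket positions (not the paper's data):

```python
def acceleration(y0, y1, y2, fps):
    """Central second difference of three consecutive position samples."""
    dt = 1.0 / fps
    return (y2 - 2.0 * y1 + y0) / dt ** 2

fps = 240.0                               # assumed high-speed frame rate
a_true = 50.0                             # m/s^2, made-up boost acceleration
y = [0.5 * a_true * (t / fps) ** 2 for t in range(3)]  # ideal positions
print(round(acceleration(y[0], y[1], y[2], fps), 6))   # recovers 50.0
```

In practice the per-frame positions come from clicking the rocket in each video frame, and the higher frame rate directly reduces the dt² amplification of position noise.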

  2. Semantic Shot Classification in Sports Video

    NASA Astrophysics Data System (ADS)

    Duan, Ling-Yu; Xu, Min; Tian, Qi

    2003-01-01

In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform a top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the sports video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which cover 90~95% of sports broadcasting video. With the supervised learning method, we can map the low-level features to middle-level semantic video shot attributes such as dominant object motion (a player), camera motion patterns, court shape, etc. On the basis of the appropriate fusion of those middle-level shot attributes, we classify video shots into the predefined video shot classes, each of which has a clear semantic meaning. The proposed method has been tested on 4 types of sports videos: tennis, basketball, volleyball, and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming, and table of contents, will be greatly facilitated.

  3. Real-time weigh-in-motion measurement using fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Palek, Leonard; Strommen, Robert; Worel, Ben; Chen, Genda

    2014-03-01

Overloaded trucks have long been one of the key causes of accelerated road damage, especially in rural regions where the design loads are expected to be small and in cold regions where the wet-and-dry cycle plays a significant role. To control the design traffic loads and further guide road design in the future, periodic weigh stations have been deployed to double-check truck loads. Weigh stations, however, can miss overloaded vehicles, slow down traffic, and require additional labor. Infrastructure weigh-in-motion sensors, on the other hand, keep traffic flowing and monitor all types of vehicles on the road. Traditional electrical weigh-in-motion sensors have shown high electromagnetic interference (EMI), high dependence on environmental conditions such as moisture, and relatively short life cycles, which make them unreliable for long-term weigh-in-motion measurements. Fiber Bragg grating (FBG) sensors, with the unique advantages of compactness, immunity to EMI and moisture, the capability of quasi-distributed sensing, and a long life cycle, are a strong candidate for long-term weigh-in-motion measurements. However, FBG sensors also suffer from the fragile nature of glass, which reduces their survival rate during installation. In this study, FBG-based weigh-in-motion sensors were packaged in fiber-reinforced polymer (FRP) materials and validated at the MnROAD facility of the Minnesota DOT (MnDOT). The design and layout of the FRP-FBG weigh-in-motion sensors, their field test setup, data acquisition, and data analysis are presented. Upon validation, the FRP-FBG sensors can be applied to weigh-in-motion measurement to assist road management.
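
An FBG reading starts from the standard first-order strain relation Δλ_B/λ_B = (1 − p_e)·ε; the photo-elastic coefficient and the example wavelength shift below are typical assumed values, not figures from this study:

```python
def strain_from_shift(lambda_b_nm, shift_nm, p_e=0.22):
    """Strain from a Bragg wavelength shift: eps = (dL/L) / (1 - p_e),
    where p_e is the effective photo-elastic coefficient of the fiber."""
    return (shift_nm / lambda_b_nm) / (1.0 - p_e)

# a 1550 nm grating shifting 1.2 nm under a passing wheel load (made-up)
print(round(strain_from_shift(1550.0, 1.2) * 1e6))  # 993 microstrain
```

Calibrating this strain against known axle loads is what turns the packaged FRP-FBG strain gauge into a weigh-in-motion sensor.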

  4. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  5. Anchor Node Localization for Wireless Sensor Networks Using Video and Compass Information Fusion

    PubMed Central

    Pescaru, Dan; Curiac, Daniel-Ioan

    2014-01-01

    Distributed sensing, computing and communication capabilities of wireless sensor networks require, in most situations, an efficient node localization procedure. In the case of random deployments in harsh or hostile environments, a general localization process within global coordinates is based on a set of anchor nodes able to determine their own position using GPS receivers. In this paper we propose another anchor node localization technique that can be used when GPS devices cannot accomplish their mission or are considered to be too expensive. This novel technique is based on the fusion of video and compass data acquired by the anchor nodes and is especially suitable for video- or multimedia-based wireless sensor networks. For these types of wireless networks the presence of video cameras is intrinsic, while the presence of digital compasses is also required for identifying the cameras' orientations. PMID:24594614

  6. High frequency mode shapes characterisation using Digital Image Correlation and phase-based motion magnification

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, A. J.; Felipe-Sesé, L.; López-Alba, E.; Díaz, F.

    2018-03-01

    High-speed video cameras provide valuable information about dynamic events, and mechanical characterisation has benefited from interpreting behaviour in slow-motion visualisations. In modal analysis, videos contribute to the evaluation of mode shapes but, generally, the motion is too subtle to be interpreted. In recent years, image treatment algorithms have been developed to generate a magnified version of the motion that can be interpreted by the naked eye. Nevertheless, optical techniques such as Digital Image Correlation (DIC) are able to provide quantitative information about the motion with higher sensitivity than the naked eye. For vibration analysis, mode shape characterisation is one of the most interesting applications of DIC: full-field measurements provide higher spatial density than classical instrumentation or Scanning Laser Doppler Vibrometry. However, the accuracy of DIC is reduced at high frequencies as a consequence of the low displacements, and hence it is habitually employed in the low-frequency spectrum. In the current work, the combination of DIC and motion magnification is explored in order to provide numerical information in magnified videos and perform DIC mode shape characterisation at unprecedentedly high frequencies by increasing the amplitude of displacements.

  7. Multi-scale AM-FM motion analysis of ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Murray, Victor; Loizou, C. P.; Pattichis, C. S.; Pattichis, Marios; Barriga, E. Simon

    2012-03-01

    An estimated 82 million American adults have one or more types of cardiovascular disease (CVD). CVD is the leading cause of death (1 of every 3 deaths) in the United States. When considered separately from other CVDs, stroke ranks third among all causes of death, behind diseases of the heart and cancer. Stroke accounts for 1 out of every 18 deaths and is the leading cause of serious long-term disability in the United States. Motion estimation in ultrasound (US) videos of carotid artery (CA) plaques provides important information regarding plaque deformation that should be considered when distinguishing between symptomatic and asymptomatic plaques. In this paper, we present the development of verifiable methods for the estimation of plaque motion. Our methodology is tested on a set of 34 ultrasound videos of carotid artery plaques (5 symptomatic and 29 asymptomatic). Plaque and wall motion analysis provides information about plaque instability and is used in an attempt to differentiate between symptomatic and asymptomatic cases. The final goal of motion estimation and analysis is to identify pathological conditions that can be detected from motion changes due to changes in tissue stiffness.

  8. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments was conducted to evaluate BS algorithms on this dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess how well each BS algorithm handles the different kinds of BS challenges represented in the dataset. The results and conclusions provide useful references for developing new BS algorithms for remote scene IR video sequences; some of them are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
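    One of the simplest baselines compared in evaluations of this kind is a running-average background model with per-pixel thresholding. The sketch below uses illustrative `alpha` and `threshold` values and a tiny synthetic sequence, not the paper's dataset or any of its evaluated algorithms:

    ```python
    import numpy as np

    # Hedged sketch of a running-average background-subtraction baseline.
    # alpha (model adaptation rate) and threshold are illustrative values.
    def running_average_bs(frames, alpha=0.05, threshold=25):
        """Yield a boolean foreground mask for each uint8 grayscale frame."""
        background = frames[0].astype(np.float64)
        masks = []
        for frame in frames:
            diff = np.abs(frame.astype(np.float64) - background)
            masks.append(diff > threshold)
            # Blend the current frame into the background model so slow
            # scene changes are absorbed while fast movers stand out.
            background = (1 - alpha) * background + alpha * frame
        return masks

    # Tiny synthetic example: a static scene, then a bright 2x2 block appears.
    frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
    frames[2][2:4, 2:4] = 200
    masks = running_average_bs(frames)
    print(masks[2].sum())  # the 4 pixels of the block are flagged as foreground
    ```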

  9. Automated Production of Movies on a Cluster of Computers

    NASA Technical Reports Server (NTRS)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video-derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video-derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  10. DexterNet: An Open Platform for Heterogeneous Body Sensor Networks and Its Applications

    DTIC Science & Technology

    2008-12-19

    [Extraction fragment of a platform comparison table: sensing modalities (motion, ECG, pulse oximetry, temperature, light, PIR, EIP, GPS, air pollution), base stations (PC, PDA, STARGATE), radios (802.15.4, 802.11, Bluetooth), and hardware (MICAz, SHIMMER) for ALARM-NET [19], DexterNet, and related body sensor network platforms; DexterNet supports SPINE.]

  11. Automatic Docking System Sensor Design, Test, and Mission Performance

    NASA Technical Reports Server (NTRS)

    Jackson, John L.; Howard, Richard T.; Cole, Helen J.

    1998-01-01

    The Video Guidance Sensor is a key element of an automatic rendezvous and docking program administered by NASA that was flown on STS-87 in November of 1997. The system used laser illumination of a passive target in the field of view of an on-board camera and processed the video image to determine the relative position and attitude between the target and the sensor. Comparisons of mission results with theoretical models and laboratory measurements will be discussed.

  12. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing, and is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques, and communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing, such as spatial edge detection or image segmentation. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
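    The time-of-travel idea described in this record reduces to a simple calculation: the FPGA timestamps motion events at adjacent pixels and divides the pixel pitch by the interval. The sketch below is an illustrative host-side version of that arithmetic; the pixel pitch and timestamps are assumed values, not figures from the article:

    ```python
    # Hedged sketch of time-of-travel velocity estimation from two
    # address-events at adjacent pixels. Values are illustrative.
    def velocity_from_events(t_first_us, t_second_us, pixel_pitch_um=10.0):
        """Velocity (um/us, numerically equal to m/s) of an edge
        crossing two adjacent pixels, from the event timestamps."""
        dt = t_second_us - t_first_us
        if dt <= 0:
            raise ValueError("second event must follow the first")
        return pixel_pitch_um / dt

    # An edge seen 50 us apart on 10-um pixels moves at 0.2 m/s
    # across the focal plane.
    print(velocity_from_events(100.0, 150.0))
    ```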

  13. Using Motion-Sensor Games to Encourage Physical Activity for Adults with Intellectual Disability.

    PubMed

    Taylor, Michael J; Taylor, David; Gamboa, Patricia; Vlaev, Ivo; Darzi, Ara

    2016-01-01

    Adults with Intellectual Disability (ID) are at high risk of poor health as a result of exercising infrequently; recent evidence indicates this is often due to a lack of opportunities to exercise. This pilot study investigated the use of motion-sensor game technology to enable and encourage exercise for this population. Five adults with ID (two female; three male, aged 34-74 [M = 55.20, SD = 16.71]) used motion-sensor games to exercise at weekly sessions at a day-centre. Session attendees reported that they enjoyed using the games and would like to use them in future. Interviews were conducted with six day-centre staff (four female; two male, aged 27-51 [M = 40.20, SD = 11.28]), which indicated ways in which the motion-sensor games could be improved for use by adults with ID, and barriers to consider in relation to their possible future implementation. Findings indicate that motion-sensor games provide a useful, enjoyable and accessible way for adults with ID to exercise. Future research could investigate implementation of motion-sensor games as a method of exercise promotion for this population on a larger scale.

  14. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to both the display of images and image analysis techniques. Regarding display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
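    A minimal global tone-mapping step for a logarithmic-response sensor can be sketched as below: the raw codes are already roughly proportional to log-irradiance, so a robust percentile stretch to 8 bits often serves as a display baseline. This is a hedged sketch with assumed percentile parameters and synthetic data, not the TM method of the paper:

    ```python
    import numpy as np

    # Hedged sketch of a global tone-mapping baseline for log-response
    # HDR data. The percentile cut-offs are illustrative choices.
    def tone_map(raw, low_pct=1.0, high_pct=99.0):
        """Map raw HDR codes to uint8 by clipping to robust percentiles."""
        lo, hi = np.percentile(raw, [low_pct, high_pct])
        out = np.clip((raw - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
        return (out * 255).astype(np.uint8)

    # Synthetic ramp standing in for a raw logarithmic-sensor frame.
    raw = np.linspace(0.0, 4000.0, 10000).reshape(100, 100)
    ldr = tone_map(raw)
    print(ldr.min(), ldr.max())  # full 0..255 display range is used
    ```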

  15. Ferroelectric Zinc Oxide Nanowire Embedded Flexible Sensor for Motion and Temperature Sensing.

    PubMed

    Shin, Sung-Ho; Park, Dae Hoon; Jung, Joo-Yun; Lee, Min Hyung; Nah, Junghyo

    2017-03-22

    We report a simple method to realize a multifunctional flexible motion sensor using ferroelectric lithium-doped ZnO-PDMS. The ferroelectric layer enables piezoelectric dynamic sensing and provides additional motion information to more precisely discriminate between different motions. The PEDOT:PSS-functionalized AgNWs, working as electrode layers for the piezoelectric sensing layer, resistively detect changes in both movement and temperature. Thus, through the optimal integration of both elements, the sensing limit, accuracy, and functionality can be further expanded. The method introduced here is a simple and effective route to realizing a high-performance flexible motion sensor with integrated multifunctionality.

  16. The Texas Production Manual: A Source Book for the Motion Picture and Video Industry. Fourth Edition.

    ERIC Educational Resources Information Center

    Kuttruff, Alma J., Ed.

    This manual is a cross-referenced directory to film industry personnel and services available in the State of Texas. The Who's Who section contains an alphabetical listing of companies and individuals in the state engaged in some aspect of motion picture or video production. These listings include brief summaries of each company and individuals'…

  17. Using DVI To Teach Physics: Making the Abstract More Concrete.

    ERIC Educational Resources Information Center

    Knupfer, Nancy Nelson; Zollman, Dean

    The ways in which Digital Video Interactive (DVI), a new video technology, can help students learn concepts of physics were studied in a project that included software design and production as well as formative and summative evaluation. DVI provides real-time motion, with the full-motion image contained to a window on part of the screen so that…

  18. Measuring perceived video quality of MPEG enhancement by people with impaired vision

    PubMed Central

    Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli

    2007-01-01

    We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909

  19. Privacy-protecting video surveillance

    NASA Astrophysics Data System (ADS)

    Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini

    2005-02-01

    Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.

  20. Motion interactive video games in home training for children with cerebral palsy: parents' perceptions.

    PubMed

    Sandlund, Marlene; Dock, Katarina; Häger, Charlotte K; Waterworth, Eva Lindh

    2012-01-01

    To explore parents' perceptions of using low-cost motion interactive video games as home training for their children with mild/moderate cerebral palsy. Semi-structured interviews were carried out with parents from 15 families after participation in an intervention where motion interactive games were used daily in home training for their child. A qualitative content analysis approach was applied. The parents' perception of the training was very positive. They expressed the view that motion interactive video games may promote positive experiences of physical training in rehabilitation, where the social aspects of gaming were especially valued. Further, the parents experienced less need to take on coaching while gaming stimulated independent training. However, there was a desire for more controlled and individualized games to better challenge the specific rehabilitative need of each child. Low-cost motion interactive games may provide increased motivation and social interaction to home training and promote independent training with reduced coaching efforts for the parents. In future designs of interactive games for rehabilitation purposes, it is important to preserve the motivational and social features of games while optimizing the individualized physical exercise.

  1. A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System

    PubMed Central

    Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum

    2017-01-01

    In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the locations of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414
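    The decoupling step described in this record can be illustrated with the simplest possible model: if each sensor reads a linear mix of two joint angles, s = C·θ, then inverting a calibration matrix recovers the pure angles. The matrix values below are illustrative assumptions, not the paper's calibration, and the paper's joint model is more elaborate:

    ```python
    import numpy as np

    # Hedged sketch of linear signal decoupling. C mixes the true
    # (flexion, abduction) angles into the two raw sensor readings;
    # its entries are illustrative, not measured values.
    C = np.array([[1.0, 0.3],   # sensor 1: mostly flexion, some abduction
                  [0.2, 1.0]])  # sensor 2: mostly abduction, some flexion

    def decouple(signals):
        """Return (flexion, abduction) angles from raw coupled signals."""
        return np.linalg.solve(C, signals)

    theta_true = np.array([30.0, 10.0])  # degrees
    signals = C @ theta_true             # what the coupled sensors report
    print(decouple(signals))             # recovers the pure angles
    ```

    A real system would fit C from calibration motions in which one joint moves at a time.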

  2. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because it is simple and well suited to a parallel scheme, we readily implemented it on a commercial field programmable gate array and on a graphics processing unit board using the compute unified device architecture. Experimental results show that the proposed approach is both fast and robust.
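    The building block of correlation-based interframe motion estimation can be sketched with phase correlation, which recovers an integer translation between two frames from the phase of their cross-power spectrum. This is a generic illustration of correlation-based shift estimation, not the paper's adaptive filter:

    ```python
    import numpy as np

    # Hedged sketch: phase correlation for integer interframe translation.
    def phase_correlation_shift(f1, f2):
        """Return (dy, dx) such that f1 ~= np.roll(f2, (dy, dx), axis=(0, 1))."""
        F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12          # keep only the phase
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts (large indices wrap to negative).
        shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
        return tuple(shifts)

    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
    print(phase_correlation_shift(shifted, frame))  # recovers (3, -5)
    ```

    A stabilizer would apply the negated shift to each frame; the paper's contribution is avoiding the accumulation of such estimates over time.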

  3. Shape Distributions of Nonlinear Dynamical Systems for Video-Based Inference.

    PubMed

    Venkataraman, Vinay; Turaga, Pavan

    2016-12-01

    This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems, which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods, each with its respective drawbacks. The novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features remain stable under different time-series lengths, where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representations (shape distributions) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework remains stable for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.

  4. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and the application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
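    The geometry of the 3x3 filter-array problem can be illustrated with the crudest possible demosaicker: each of the nine bands is sampled on every third pixel in each direction, and the gaps are filled by nearest-neighbor upsampling. This is a hedged sketch of the sampling structure only; the paper's edge-guided and super-resolution methods are far more sophisticated:

    ```python
    import numpy as np

    # Hedged sketch: nearest-neighbor demosaicking of a 3x3 spectral
    # filter array. Band (r, c) of the pattern is sampled at pixels
    # (r::3, c::3) of the mosaicked frame.
    def demosaic_3x3(mosaic):
        """Return an (H, W, 9) band cube from a single (H, W) mosaicked frame."""
        h, w = mosaic.shape
        cube = np.empty((h, w, 9), dtype=mosaic.dtype)
        for band, (r, c) in enumerate((i, j) for i in range(3) for j in range(3)):
            sub = mosaic[r::3, c::3]  # sparse samples of this band
            # Nearest-neighbor upsampling back to full resolution.
            full = np.repeat(np.repeat(sub, 3, axis=0), 3, axis=1)[:h, :w]
            cube[:, :, band] = full
        return cube

    mosaic = np.arange(36, dtype=np.float64).reshape(6, 6)
    cube = demosaic_3x3(mosaic)
    print(cube.shape)  # nine full-resolution band planes
    ```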

  5. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  6. Thermal Property Analysis of Axle Load Sensors for Weighing Vehicles in Weigh-in-Motion System

    PubMed Central

    Burnos, Piotr; Gajda, Janusz

    2016-01-01

    Systems which permit the weighing of vehicles in motion are called dynamic Weigh-in-Motion scales. In such systems, axle load sensors are embedded in the pavement. Among the influencing factors that negatively affect weighing accuracy is the pavement temperature. This paper presents a detailed analysis of this phenomenon and describes the properties of polymer, quartz and bending plate load sensors. The studies were conducted in two ways: at roadside Weigh-in-Motion sites and at a laboratory using a climate chamber. For accuracy assessment of roadside systems, the reference vehicle method was used. The pavement temperature influence on the weighing error was experimentally investigated as well as a non-uniform temperature distribution along and across the Weigh-in-Motion site. Tests carried out in the climatic chamber allowed the influence of temperature on the sensor intrinsic error to be determined. The results presented clearly show that all kinds of sensors are temperature sensitive. This is a new finding, as up to now the quartz and bending plate sensors were considered insensitive to this factor. PMID:27983704

  7. Self-adapted and tunable graphene strain sensors for detecting both subtle and large human motions.

    PubMed

    Tao, Lu-Qi; Wang, Dan-Yang; Tian, He; Ju, Zhen-Yi; Liu, Ying; Pang, Yu; Chen, Yuan-Quan; Yang, Yi; Ren, Tian-Ling

    2017-06-22

    Conventional strain sensors rarely have both a high gauge factor and a large strain range simultaneously, so they can only be used in specific situations where only a high sensitivity or a large strain range is required. For detecting human motions, however, which include both subtle and large motions, such strain sensors cannot meet the diverse demands simultaneously. Here, we present laser-patterned graphene strain sensors with self-adapted and tunable performance for the first time. A series of strain sensors with either an ultrahigh gauge factor or a preferable strain range can be fabricated simultaneously via one-step laser patterning, and are suitable for detecting all human motions. The strain sensors have a GF of up to 457 with a strain range of 35%, or a strain range of up to 100% with a GF of 268. Most importantly, the performance of the strain sensors can be easily tuned by adjusting the patterns of the graphene, so that the sensors can meet diverse demands in both subtle- and large-motion situations. The graphene strain sensors show significant potential in applications such as wearable electronics, health monitoring and intelligent robots. Furthermore, the facile, fast and low-cost fabrication method will make them practical for commercial applications in the future.
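    The gauge factors quoted in this record are applied through the standard definition GF = (ΔR/R0)/ε, so a resistance trace converts directly to strain once GF is known. The GF values below come from the abstract; the resistance numbers are illustrative assumptions:

    ```python
    # Hedged sketch: applying the gauge-factor definition
    # GF = (dR / R0) / strain to recover strain from resistance.
    def strain_from_resistance(r, r0, gauge_factor):
        """Return strain from a measured resistance r and baseline r0."""
        return (r - r0) / (r0 * gauge_factor)

    # On the high-sensitivity sensor (GF = 457, per the abstract), a 10%
    # resistance rise corresponds to only ~0.02% strain: well suited to
    # subtle motions such as pulse or facial movements.
    print(strain_from_resistance(1.1, 1.0, 457))
    ```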

  8. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications for higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  9. Geometrical and optical calibration of a vehicle-mounted IR imager for land mine localization

    NASA Astrophysics Data System (ADS)

    Aitken, Victor C.; Russell, Kevin L.; McFee, John E.

    2000-08-01

    Many present day vehicle-mounted landmine detection systems use IR imagers. Information furnished by these imaging systems usually consists of video and the location of targets within the video. In multisensor systems employing data fusion, there is a need to convert sensor information to a common coordinate system that all sensors share.
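
    The conversion to a common coordinate system mentioned here is, in the planar case, a rigid transform from each sensor frame into a shared vehicle frame. A minimal sketch; the yaw angle and mounting offsets below are illustrative assumptions, not values from the paper:

```python
import math

def to_common_frame(point, yaw, tx, ty):
    """2D rigid transform from a sensor frame to a common vehicle frame:
    rotate by the sensor's mounting yaw, then translate by its offset.
    All parameter values used below are hypothetical."""
    x, y = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + tx, s * x + c * y + ty)

# Target 2 m ahead of an imager mounted 1 m forward and rotated 90 degrees
print(tuple(round(v, 6) for v in to_common_frame((2.0, 0.0), math.pi / 2, 1.0, 0.0)))  # → (1.0, 2.0)
```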

  10. Biomechanics of the Sensor–Tissue Interface—Effects of Motion, Pressure, and Design on Sensor Performance and Foreign Body Response—Part II: Examples and Application

    PubMed Central

    Helton, Kristen L; Ratner, Buddy D; Wisniewski, Natalie A

    2011-01-01

    This article is the second part of a two-part review in which we explore the biomechanics of the sensor–tissue interface as an important aspect of continuous glucose sensor biocompatibility. Part I, featured in this issue of Journal of Diabetes Science and Technology, describes a theoretical framework of how biomechanical factors such as motion and pressure (typically micromotion and micropressure) affect tissue physiology around a sensor and in turn, impact sensor performance. Here in Part II, a literature review is presented that summarizes examples of motion or pressure affecting sensor performance. Data are presented that show how both acute and chronic forces can impact continuous glucose monitor signals. Also presented are potential strategies for countering the ill effects of motion and pressure on glucose sensors. Improved engineering and optimized chemical biocompatibility have advanced sensor design and function, but we believe that mechanical biocompatibility, a rarely considered factor, must also be optimized in order to achieve an accurate, long-term, implantable sensor. PMID:21722579

  11. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure from motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera movement restriction imposed by SFM so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further developed a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as those from user scribbles) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user study results showing that our approach is more intuitive and less labor intensive, while producing a 3D effect comparable to that from current state-of-the-art interactive algorithms.

  12. The Effect of Motion Analysis Activities in a Video-Based Laboratory in Students' Understanding of Position, Velocity and Frames of Reference

    ERIC Educational Resources Information Center

    Koleza, Eugenia; Pappas, John

    2008-01-01

    In this article, we present the results of a qualitative research project on the effect of motion analysis activities in a Video-Based Laboratory (VBL) on students' understanding of position, velocity and frames of reference. The participants in our research were 48 pre-service teachers enrolled in Education Departments with no previous strong…

  13. Balance rehabilitation: promoting the role of virtual reality in patients with diabetic peripheral neuropathy.

    PubMed

    Grewal, Gurtej S; Sayeed, Rashad; Schwenk, Michael; Bharara, Manish; Menzies, Robert; Talal, Talal K; Armstrong, David G; Najafi, Bijan

    2013-01-01

    Individuals with diabetic peripheral neuropathy frequently experience concomitant impaired proprioception and postural instability. Conventional exercise training has been demonstrated to be effective in improving balance but does not incorporate visual feedback targeting joint perception, which is an integral mechanism that helps compensate for impaired proprioception in diabetic peripheral neuropathy. This prospective cohort study recruited 29 participants (mean ± SD: age, 57 ± 10 years; body mass index [calculated as weight in kilograms divided by height in meters squared], 26.9 ± 3.1). Participants satisfying the inclusion criteria performed predefined ankle exercises through reaching tasks, with visual feedback from the ankle joint projected on a screen. Ankle motion in the mediolateral and anteroposterior directions was captured using wearable sensors attached to the participant's shank. Improvements in postural stability were quantified by measuring center of mass sway area and the reciprocal compensatory index before and after training using validated body-worn sensor technology. Findings revealed a significant reduction in center of mass sway after training (mean, 22%; P = .02). A higher postural stability deficit (high body sway) at baseline was associated with higher training gains in postural balance (reduction in center of mass sway) (r = -0.52, P < .05). In addition, significant improvement was observed in postural coordination between the ankle and hip joints (mean, 10.4%; P = .04). The present research implemented a novel balance rehabilitation strategy based on virtual reality technology. The method included wearable sensors and an interactive user interface for real-time visual feedback based on ankle joint motion, similar to a video gaming environment, for compensating impaired joint proprioception. 
These findings support that visual feedback generated from the ankle joint coupled with motor learning may be effective in improving postural stability in patients with diabetic peripheral neuropathy.

  14. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, Thomas E.

    1994-01-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion.

  15. Ultra-wideband radar motion sensor

    DOEpatents

    McEwan, T.E.

    1994-11-01

    A motion sensor is based on ultra-wideband (UWB) radar. UWB radar range is determined by a pulse-echo interval. For motion detection, the sensors operate by staring at a fixed range and then sensing any change in the averaged radar reflectivity at that range. A sampling gate is opened at a fixed delay after the emission of a transmit pulse. The resultant sampling gate output is averaged over repeated pulses. Changes in the averaged sampling gate output represent changes in the radar reflectivity at a particular range, and thus motion. 15 figs.
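
    The stare-and-average detection scheme described in both patent records can be sketched in a few lines: average the sampling-gate output over repeated pulses and flag any sample that departs from that average. The smoothing factor and threshold below are illustrative choices, not values from the patent.

```python
def detect_motion(samples, alpha=0.05, threshold=0.1):
    """Flag motion when a sampling-gate output deviates from the running
    average of repeated pulses by more than `threshold`. `alpha` and
    `threshold` are illustrative, not from the patent."""
    avg = samples[0]
    events = []
    for i, s in enumerate(samples[1:], start=1):
        if abs(s - avg) > threshold:
            events.append(i)
        avg = (1 - alpha) * avg + alpha * s  # average over repeated pulses
    return events

# Static reflectivity, then a brief change (target crossing the fixed range)
quiet = [0.50 + 0.001 * (i % 3) for i in range(200)]
moving = quiet[:100] + [0.8] * 5 + quiet[105:]
print(detect_motion(quiet))        # → []
print(detect_motion(moving)[:1])   # → [100]
```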

  16. Validation of a new method for finding the rotational axes of the knee using both marker-based roentgen stereophotogrammetric analysis and 3D video-based motion analysis for kinematic measurements.

    PubMed

    Roland, Michelle; Hull, M L; Howell, S M

    2011-05-01

    In a previous paper, we reported the virtual axis finder, which is a new method for finding the rotational axes of the knee. The virtual axis finder was validated through simulations that were subject to limitations. Hence, the objective of the present study was to perform a mechanical validation with two measurement modalities: 3D video-based motion analysis and marker-based roentgen stereophotogrammetric analysis (RSA). A two rotational axis mechanism was developed, which simulated internal-external (or longitudinal) and flexion-extension (FE) rotations. The actual axes of rotation were known with respect to motion analysis and RSA markers within ± 0.0006 deg and ± 0.036 mm, and ± 0.0001 deg and ± 0.016 mm, respectively. The orientation and position root mean squared errors for identifying the longitudinal rotation (LR) and FE axes with video-based motion analysis (0.26 deg, 0.28 mm, 0.36 deg, and 0.25 mm, respectively) were smaller than with RSA (1.04 deg, 0.84 mm, 0.82 deg, and 0.32 mm, respectively). The random error or precision in the orientation and position was significantly better (p=0.01 and p=0.02, respectively) in identifying the LR axis with video-based motion analysis (0.23 deg and 0.24 mm) than with RSA (0.95 deg and 0.76 mm). There was no significant difference in the bias errors between measurement modalities. In comparing the mechanical validations to virtual validations, the virtual validations produced errors comparable to those of the mechanical validation. The only significant difference between the errors of the mechanical and virtual validations was the precision in the position of the LR axis while simulating video-based motion analysis (0.24 mm and 0.78 mm, p=0.019). These results indicate that video-based motion analysis with the equipment used in this study is the superior measurement modality for use with the virtual axis finder, but both measurement modalities produce satisfactory results.
The lack of significant differences between validation techniques suggests that the virtual sensitivity analysis previously performed was appropriately modeled. Thus, the virtual axis finder can be applied with a thorough understanding of its errors in a variety of test conditions.

  17. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
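
    A crude stand-in for the per-frame-pair change computation the thesis describes: score the normalized change between successive frames and classify large scores as scene cuts. The plain pixel-difference feature and the threshold are simplifications of the paper's fuzzy feature extractors, assumed here for illustration.

```python
def frame_change(f1, f2):
    """Normalized change between two 8-bit grayscale frames
    (equal-length pixel lists), a degree of change in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / (255 * len(f1))

def detect_scene_cuts(frames, cut_threshold=0.3):
    """Indices i where the change from frame i-1 to i exceeds the threshold."""
    return [i for i in range(1, len(frames))
            if frame_change(frames[i - 1], frames[i]) > cut_threshold]

dark = [10] * 16
bright = [200] * 16
frames = [dark, dark, bright, bright]
print(detect_scene_cuts(frames))  # cut between frames 1 and 2 → [2]
```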

  18. Ultra-wideband radar sensors and networks

    DOEpatents

    Leach, Jr., Richard R; Nekoogar, Faranak; Haugen, Peter C

    2013-08-06

    Ultra wideband radar motion sensors strategically placed in an area of interest communicate with a wireless ad hoc network to provide remote area surveillance. Swept range impulse radar and a heart and respiration monitor combined with the motion sensor further improve discrimination.

  19. Enhancing physics demos using iPhone slow motion

    NASA Astrophysics Data System (ADS)

    Lincoln, James

    2017-12-01

    Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers, especially in cases of fast-moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves, and luckily many of them will already have this technology in their pockets. The iPhone "S" series has the slow motion video feature as standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.

  20. Non-contact and noise tolerant heart rate monitoring using microwave doppler sensor and range imagery.

    PubMed

    Matsunaga, Daichi; Izumi, Shintaro; Okuno, Keisuke; Kawaguchi, Hiroshi; Yoshimoto, Masahiko

    2015-01-01

    This paper describes a non-contact and noise-tolerant heart beat monitoring system. The proposed system comprises a microwave Doppler sensor and range imagery using Microsoft Kinect™. A possible application of the proposed system is driver health monitoring. We introduce a sensor fusion approach to minimize the heart beat detection error. The proposed algorithm can subtract the body motion artifact from the Doppler sensor output using time-frequency analysis. The body motion artifact is a crucially important problem for biosignal monitoring using a microwave Doppler sensor. The body motion speed is obtainable from range imagery, which has 5-mm resolution at 30-cm distance. Measurement results show that the success rate of the heart beat detection is improved about 75% on average when the Doppler wave is degraded by the body motion artifact.
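
    Once motion artifacts are removed, heart rate estimation from a Doppler trace reduces to finding the dominant spectral peak in the cardiac band. A minimal sketch of that final step only, scanning a direct DFT over an assumed 0.8-3 Hz band (roughly 48-180 bpm); the paper's time-frequency artifact subtraction and Kinect fusion are not reproduced here.

```python
import math

def dominant_freq(signal, fs, f_lo=0.8, f_hi=3.0):
    """Dominant frequency in [f_lo, f_hi] Hz via a direct DFT scan.
    Band limits are illustrative assumptions for a cardiac band."""
    n = len(signal)
    best_f, best_mag = None, -1.0
    for k in range(int(f_lo * n / fs), int(f_hi * n / fs) + 1):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_f, best_mag = k * fs / n, mag
    return best_f

fs = 50.0
t = [i / fs for i in range(500)]                      # 10 s record
sig = [math.sin(2 * math.pi * 1.2 * x) for x in t]    # 1.2 Hz beat
print(round(dominant_freq(sig, fs) * 60))             # → 72 (bpm)
```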

  1. Distinguishing the causes of falls in humans using an array of wearable tri-axial accelerometers.

    PubMed

    Aziz, Omar; Park, Edward J; Mori, Greg; Robinovitch, Stephen N

    2014-01-01

    Falls are the number one cause of injury in older adults. Lack of objective evidence on the cause and circumstances of falls is often a barrier to effective prevention strategies. Previous studies have established the ability of wearable miniature inertial sensors (accelerometers and gyroscopes) to automatically detect falls, for the purpose of delivering medical assistance. In the current study, we extend the applications of this technology, by developing and evaluating the accuracy of wearable sensor systems for determining the cause of falls. Twelve young adults participated in experimental trials involving falls due to seven causes: slips, trips, fainting, and incorrect shifting/transfer of body weight while sitting down, standing up from sitting, reaching and turning. Features (means and variances) of acceleration data acquired from four tri-axial accelerometers during the falling trials were input to a linear discriminant analysis technique. Data from an array of three sensors (left ankle+right ankle+sternum) provided at least 83% sensitivity and 89% specificity in classifying falls due to slips, trips, and incorrect shift of body weight during sitting, reaching and turning. Classification of falls due to fainting and incorrect shift during rising was less successful across all sensor combinations. Furthermore, similar classification accuracy was observed with data from wearable sensors and a video-based motion analysis system. These results establish a basis for the development of sensor-based fall monitoring systems that provide information on the cause and circumstances of falls, to direct fall prevention strategies at a patient or population level. Copyright © 2013 Elsevier B.V. All rights reserved.
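
    The paper feeds means and variances of accelerometer windows into linear discriminant analysis. As a dependency-free illustration, the sketch below uses the same mean/variance features with a nearest-centroid rule standing in for LDA; the windows and fall-cause labels are invented for the example, not the study's data.

```python
import statistics

def features(axes):
    """Mean and variance per axis window, mirroring the paper's feature set."""
    out = []
    for a in axes:
        out.append(statistics.mean(a))
        out.append(statistics.pvariance(a))
    return out

def nearest_centroid(train, x):
    """train: {label: [feature vectors]}. Nearest-centroid stand-in for LDA."""
    best, best_d = None, float("inf")
    for label, vecs in train.items():
        n = len(vecs)
        centroid = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if d < best_d:
            best, best_d = label, d
    return best

# Hypothetical single-axis windows: slips show high variance, sit transfers low
train = {"slip": [features([[0.1, 2.0, -1.8, 1.5]])],
         "sit-transfer": [features([[0.9, 1.0, 1.1, 1.0]])]}
print(nearest_centroid(train, features([[0.0, 1.9, -1.7, 1.4]])))  # → slip
```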

  2. Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers

    PubMed Central

    Luu, Loc; Dinh, Anh

    2018-01-01

    This study investigates motion noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking. A Wi-Fi based data acquisition system and a Matlab framework were developed to collect and process data while the subjects are in motion. The tests include eight volunteers with no record of heart disease. The walking and running data are analyzed to find the minimal-noise bandwidth of the SCG signal; this bandwidth is used to design filters for the motion noise removal techniques and peak signal detection. There are two main techniques for combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before the data acquisition system. The digital processing handles all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques are tested on three placements of the accelerometer sensors (horizontal, vertical, and diagonal) during gentle motion and walking. In general, total acceleration and z-axis acceleration are the best techniques for gentle motion on all sensor placements, improving average systolic signal-to-noise ratio (SNR) around 2 times and average diastolic SNR around 3 times compared with traditional methods using only one accelerometer. With walking motion, the ADDER and z-axis acceleration are the best techniques on all placements, enhancing average systolic SNR about 7 times and average diastolic SNR about 11 times compared with the one-accelerometer method. Among the sensor placements, the horizontal placement performs best across all motions. PMID:29614821
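
    The analog subtractor idea can be illustrated with a toy signal model: assume the motion artifact appears identically on both channels while the cardiac vibration arrives with opposite sign, so a differential combination cancels the artifact. That sign convention is an assumption made for this demo, not the paper's measured sensor geometry.

```python
import math

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def snr_db(signal, noise):
    """SNR in decibels from an RMS ratio of separate signal/noise traces."""
    return 20 * math.log10(rms(signal) / rms(noise))

# Toy model: common-mode artifact, opposite-sign cardiac component
scg      = [0.2 * math.sin(0.5 * i) for i in range(200)]
artifact = [1.0 * math.sin(0.05 * i) for i in range(200)]
ch1 = [s + a for s, a in zip(scg, artifact)]
ch2 = [-s + a for s, a in zip(scg, artifact)]
combined = [(a - b) / 2 for a, b in zip(ch1, ch2)]  # artifact cancels
print(snr_db(scg, artifact) < 0)                    # raw: artifact dominates → True
print(rms([c - s for c, s in zip(combined, scg)]) < 1e-9)  # SCG recovered → True
```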

  3. High-Speed Video Analysis of Damped Harmonic Motion

    ERIC Educational Resources Information Center

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s⁻¹ and Tracker Video Analysis (Tracker) software. We present empirical data for…
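
    For an underdamped track extracted from video like the one studied here, the damping constant is commonly recovered from successive peak amplitudes via the logarithmic decrement. A minimal sketch of that standard textbook method, not the Tracker workflow itself:

```python
import math

def damping_from_peaks(peaks):
    """Mean logarithmic decrement delta = ln(A_n / A_{n+1}) of
    successive oscillation peak amplitudes."""
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    return sum(decs) / len(decs)

# Peaks of x(t) = exp(-gamma t) cos(omega t) shrink by exp(-gamma T) per period T
gamma, T = 0.5, 0.8
peaks = [math.exp(-gamma * n * T) for n in range(5)]
delta = damping_from_peaks(peaks)
print(round(delta / T, 3))  # recovers gamma → 0.5
```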

  4. Avoiding space robot collisions utilizing the NASA/GSFC tri-mode skin sensor

    NASA Technical Reports Server (NTRS)

    Prinz, F. B. S.; Mahalingam, S.

    1992-01-01

    A capacitance-based proximity sensor, the 'Capaciflector' (Vranish 92), has been developed at the Goddard Space Flight Center of NASA. We previously investigated the use of this sensor for avoiding and maneuvering around unexpected objects (Mahalingam 92). The approach developed there helps in executing collision-free gross motions. Another important aspect of robot motion planning is fine motion planning. Let us classify manipulator robot motion planning into two groups at the task level: gross motion planning and fine motion planning. We use the term 'gross planning' where the major degrees of freedom (dofs) of the robot execute large motions, for example, the motion of a robot in a pick-and-place type operation. We use the term 'fine motion' to indicate motions of the robot where the large dofs do not move much, moving far less than the minor dofs, such as in inserting a peg in a hole. In this report we describe our experiments and experiences in this area.

  5. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  6. Optical tweezers with 2.5 kHz bandwidth video detection for single-colloid electrophoresis

    NASA Astrophysics Data System (ADS)

    Otto, Oliver; Gutsche, Christof; Kremer, Friedrich; Keyser, Ulrich F.

    2008-02-01

    We developed an optical tweezers setup to study the electrophoretic motion of colloids in an external electric field. The setup is based on standard components for illumination and video detection. Our video-based optical tracking of the colloid motion has a time resolution of 0.2 ms, resulting in a bandwidth of 2.5 kHz. This enables calibration of the optical tweezers by Brownian motion without applying a quadrant photodetector. We demonstrate that our system has a spatial resolution of 0.5 nm and a force sensitivity of 20 fN using a Fourier algorithm to detect periodic oscillations of the trapped colloid caused by an external ac field. The electrophoretic mobility and zeta potential of a single colloid can be extracted in aqueous solution, avoiding screening effects common for usual bulk measurements.
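
    Calibrating a trap by Brownian motion can be done in several ways; one common route is the equipartition relation k = k_B·T / var(x). The sketch below illustrates that relation on synthetic Gaussian positions; it is a generic method sketch, not the paper's own Fourier/power-spectrum calibration.

```python
import math, random

def trap_stiffness(positions_m, temperature_K=295.0):
    """Equipartition calibration of an optical trap: k = k_B T / var(x)."""
    kB = 1.380649e-23  # J/K
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return kB * temperature_K / var

random.seed(1)
k_true = 1e-5  # N/m, a typical trap stiffness (assumed value)
sigma = math.sqrt(1.380649e-23 * 295.0 / k_true)   # ~20 nm position spread
xs = [random.gauss(0.0, sigma) for _ in range(20000)]
k_est = trap_stiffness(xs)
print(abs(k_est / k_true - 1) < 0.05)  # within a few percent for this sample
```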

  7. Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach

    NASA Astrophysics Data System (ADS)

    Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.

    2012-04-01

    Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that in the set of 23 trajectories from 3 different experiments, the automatically recognized motion is subdiffusive, and published an intriguing microscopy video. Here, we extract the corresponding time series from this video using an image segmentation method and present its detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.
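
    The long-memory ingredient of a FARIMA(p, d, q) model is the fractional difference operator (1-B)^d, whose binomial expansion gives the filter weights. A minimal sketch of those weights using the standard recurrence; d = 0.4 is illustrative, and a negative d corresponds to the negative-memory case the abstract mentions.

```python
def frac_diff_weights(d, n):
    """First n weights of (1-B)^d: w_0 = 1, w_k = -w_{k-1} (d - k + 1) / k.
    These are the coefficients applied to lagged values in a FARIMA model."""
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)
    return w

print([round(x, 4) for x in frac_diff_weights(0.4, 4)])  # → [1.0, -0.4, -0.12, -0.064]
print(frac_diff_weights(1, 4))  # d=1 reduces to ordinary differencing: [1.0, -1.0, 0.0, 0.0]
```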

  8. Infrasound and Seismic Observation of Hayabusa Reentry as An Artificial Meteorite Fall

    NASA Astrophysics Data System (ADS)

    Ishihara, Y.; Hiramatsu, Y.; Yamamoto, M.; Furumoto, M.; Fujita, K.

    2011-12-01

    The Hayabusa, the world's first sample-return minor body explorer, came back to the Earth and reentered the Earth's atmosphere on June 13, 2010. Following the reentries of the Genesis in 2004 and the Stardust in 2006, the return of the Hayabusa Sample Return Capsule (H-SRC) was the third direct reentry event from an interplanetary transfer orbit to the Earth at a velocity of over 11.2 km/s. In addition, it was the world's first case of direct reentry of the spacecraft (H-S/C) itself from an interplanetary transfer orbit. The H-SRC and H-S/C reentries are very good analogues for studying bolide-size meteors and meteorite falls. We therefore conducted a ground observation campaign for aspects of meteor sciences. We carried out multi-site ground observations of the Hayabusa reentry in the Woomera Prohibited Area (WPA), Australia. The observations were configured with optical imaging with still and video recordings, spectroscopies, and shockwave detection with infrasound and seismic sensors. In this study, we report details of the infrasound/seismic observations and their results. To detect shockwaves from the H-SRC and the H-S/C, we installed three small aperture infrasound/seismic arrays as the main stations. In addition, we also installed three single-component seismic sub stations and an audible sound recorder. The infrasound and seismic sensors clearly recorded sonic boom type shockwaves from the H-SRC and disrupted fragments of the H-S/C itself. The audible recording also detected those shockwave sounds in the human audible band. Positive overpressure values of shockwaves (corresponding to the H-SRC) recorded at the three main stations are 1.3 Pa, 1.0 Pa, and 0.7 Pa with slant distances of 36.9 km, 54.9 km, and 67.8 km (i.e., source altitudes of 36.5 km, 38.9 km, and 40.6 km), respectively. These amplitudes of shockwave overpressures are systematically smaller than those of theoretical predictions.
We tried to identify the sources of the shockwave signals from the disrupted fragments with the optically identified fragments of the H-S/C. By comparing the infrasonic pressure waves and the video image analyses, the generation of sonic boom type shockwaves by both the H-SRC and fragmented parts of the H-S/C at an altitude of 40±1 km was confirmed with one-to-one correspondence. The incident vectors of the shockwave from the H-SRC at all three arrays are estimated by F-K spectrum and agree well with the predicted ones. Particle motions of ground motions excited by the shockwave from the H-SRC show characteristics of a typical Rayleigh wave. In addition, we examine the relationship between the amplitudes of those ground motions and overpressure values corresponding to the H-SRC. We compare amplitudes of ground motions detected by seismometers to theoretical estimations of air-to-ground coupling. In the calculations, we used amplitudes of overpressures observed by infrasound sensors as incident pressure waves, and the elastic moduli of each site are obtained by H/V spectrum analysis. The observed amplitudes of the ground motions are almost consistent with the theoretical estimations.

  9. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2:305-314. doi: 10.1002/wcs.110. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
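
    The motion sensors this review discusses are commonly modeled as correlation-type (Reichardt) detectors: the delayed signal from one receptor is multiplied with its neighbour's current signal, and the mirror-symmetric product is subtracted, so opposite motion directions give opposite signs. A textbook sketch of that detector, not a model taken from the review itself:

```python
def reichardt_output(left, right, delay=1):
    """Correlation-type motion detector over two receptor time series.
    Positive output: stimulus reaches `right` after `left` (rightward)."""
    out = 0.0
    for t in range(delay, len(left)):
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out

# A pattern moving left-to-right reaches `right` one step after `left`
left  = [0, 1, 0, 0, 1, 0, 0]
right = [0, 0, 1, 0, 0, 1, 0]
print(reichardt_output(left, right) > 0)  # → True (rightward)
print(reichardt_output(right, left) < 0)  # → True (reversed input, leftward)
```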

  10. A Study on the Performance of Low Cost MEMS Sensors in Strong Motion Studies

    NASA Astrophysics Data System (ADS)

    Tanırcan, Gulum; Alçık, Hakan; Kaya, Yavuz; Beyen, Kemal

    2017-04-01

    Recent advances in sensors have helped the growth of local networks. In recent years, many Micro Electro Mechanical System (MEMS)-based accelerometers have been successfully used in seismology and earthquake engineering projects, basically due to the increased precision obtained in these downsized instruments. Moreover, they are cheaper alternatives to force-balance type accelerometers. In Turkey, though MEMS-based accelerometers have been used in various individual applications such as magnitude and location determination of earthquakes, structural health monitoring, and earthquake early warning systems, MEMS-based strong motion networks are not currently available in other populated areas of the country. The motivation of this study comes from the fact that, if MEMS sensors are qualified to record strong motion parameters of large earthquakes, a dense network can be formed at an affordable price in highly populated areas. The goals of this study are 1) to test, through shake table tests, the performance of MEMS sensors available in the Institute's inventory, and 2) to set up a small-scale network for observing online data transfer speed to a trusted in-house routine. In order to evaluate the suitability of the sensors for strong motion related studies, the MEMS sensors and a reference sensor are tested under excitations of sweeping waves as well as scaled earthquake recordings. Amplitude responses and correlation coefficients versus frequency are compared. As for earthquake recordings, comparisons are carried out in terms of strong motion (SM) parameters (PGA, PGV, AI, CAV) and the elastic response of structures (Sa). Furthermore, this paper also focuses on sensitivity and selectivity of sensor performance in the time-frequency domain to compare different sensing characteristics, and analyzes the basic strong motion parameters that influence the design majors.
Results show that the cheapest MEMS sensors under investigation are able to record the mid-frequency dominant SM parameters PGV and CAV with high correlation. PGA and AI, the high frequency components of the ground motion, are underestimated. Such a difference, on the other hand, does not manifest itself on intensity estimations. PGV and CAV values from the reference and MEMS sensors converge to the same seismic intensity level. Hence a strong motion network with MEMS sensors could be a modest option to produce PGV-based damage impact of an urban area under large magnitude earthquake threats in the immediate vicinity.
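
    The strong-motion parameters compared in this study have standard definitions: PGA and PGV are the peak absolute acceleration and velocity, Arias intensity is AI = (π/2g)·∫a²dt, and CAV = ∫|a|dt. A minimal sketch computing them from a sampled acceleration record (a synthetic sine input, not the study's data):

```python
import math

def strong_motion_params(acc, dt):
    """PGA, PGV, Arias intensity (AI) and CAV from an acceleration record
    in m/s^2 sampled at step dt. Standard definitions, not study-specific."""
    g = 9.81
    pga = max(abs(a) for a in acc)
    ai = (math.pi / (2 * g)) * sum(a * a for a in acc) * dt
    cav = sum(abs(a) for a in acc) * dt
    # velocity trace by cumulative trapezoidal integration, for PGV
    vel, v = [0.0], 0.0
    for a0, a1 in zip(acc, acc[1:]):
        v += 0.5 * (a0 + a1) * dt
        vel.append(v)
    pgv = max(abs(x) for x in vel)
    return pga, pgv, ai, cav

dt = 0.01
acc = [math.sin(2 * math.pi * i * dt) for i in range(100)]  # one 1 Hz cycle
pga, pgv, ai, cav = strong_motion_params(acc, dt)
print(round(pga, 2))  # → 1.0
```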

  11. Joint modality fusion and temporal context exploitation for semantic video analysis

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
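
    The paper's integrated Bayesian Network jointly fuses modalities and temporal context; as a much-simplified illustration of the modality-fusion half alone, the sketch below combines per-modality class posteriors under a naive independence assumption. The class labels are hypothetical, loosely inspired by the tennis domain mentioned above.

```python
def fuse_modalities(posteriors):
    """Naive (independence-assumption) fusion of per-modality class
    posteriors: multiply and renormalize. A simplified stand-in for the
    paper's integrated Bayesian Network."""
    classes = list(posteriors[0].keys())
    scores = {c: 1.0 for c in classes}
    for p in posteriors:
        for c in classes:
            scores[c] *= p[c]
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# Hypothetical per-shot posteriors from visual and audio HMMs
visual = {"rally": 0.6, "break": 0.4}
audio  = {"rally": 0.7, "break": 0.3}
fused = fuse_modalities([visual, audio])
print(max(fused, key=fused.get))  # → rally
```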

  12. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% and reducing workload at the same time.

  13. iTRAC : intelligent video compression for automated traffic surveillance systems.

    DOT National Transportation Integrated Search

    2010-08-01

Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...

  14. Slow motion in films and video clips: Music influences perceived duration and emotion, autonomic physiological activation and pupillary responses.

    PubMed

    Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning

    2018-01-01

    Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional response to media clips containing decelerated human motion, with or without music using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real-time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music compared to visual-only presentations strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre affected responses in addition. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.

  15. Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes

    NASA Technical Reports Server (NTRS)

    Abramovici, Alex

    2012-01-01

A document discusses an electro-optic sensor that consists of a collimator, attached to one segment, and a quad diode, attached to an adjacent segment. Relative segment-segment motion causes the beam from the collimator to move across the quad diode, thus generating a measurable electric signal. This sensor type, which is relatively inexpensive, can be configured as an edge sensor, or as a remote segment-segment motion sensor.

  16. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

In order to properly control rehabilitation robotic devices, the measurement of interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires the use of accurate sensors, which increase the cost and complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper-limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has similar performance to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can be adopted to replace actual precise but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.

  17. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

The aim of this study is to propose a method for three-dimensional (3D) measurement of forearm and upper-arm movement during the baseball pitching motion using inertial sensors, without serious consideration of sensor installation. Although highly accurate measurement of sports motion is currently achieved using optical motion capture systems, these have disadvantages such as camera calibration and restrictions on the measurement location. The proposed method for 3D measurement of the pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. The experimental results of the measurement of pitching motion show that trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from the motion capture system, within an estimation error of about 10%.
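The end-point correction mentioned here can be illustrated in one dimension: double-integrate a biased acceleration signal, then redistribute the terminal error linearly along the trajectory so the final position matches a known value. This is a minimal synthetic sketch, not the authors' full 3D algorithm.

```python
import numpy as np

def integrate_with_drift_correction(acc, dt, final_pos):
    """Integrate acceleration twice, then linearly redistribute the
    end-point error so the trajectory ends at a known final position."""
    vel = np.cumsum(acc) * dt               # naive velocity estimate
    pos = np.cumsum(vel) * dt               # naive position estimate
    error = pos[-1] - final_pos             # accumulated integration error
    ramp = np.linspace(0.0, 1.0, len(pos))  # 0 at start, 1 at the end
    return pos - error * ramp

# Synthetic single-axis example: true motion is x(t) = 0.5 t^2 (a = 1 m/s^2),
# but the accelerometer carries a constant 0.05 m/s^2 bias.
dt, n = 0.01, 200
t = np.arange(n) * dt
acc_measured = np.full(n, 1.0 + 0.05)       # biased measurement
true_final = 0.5 * 1.0 * t[-1] ** 2         # final position known a priori
pos = integrate_with_drift_correction(acc_measured, dt, true_final)
print(round(abs(pos[-1] - true_final), 6))  # end-point error forced to 0.0
```

Without the ramp correction, the 0.05 m/s^2 bias alone would shift the end point by roughly 0.1 m over these two seconds.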

  18. Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach

    NASA Astrophysics Data System (ADS)

    Qu, Qi; Pei, Yong; Modestino, James W.; Tian, Xusheng

    2006-12-01

    Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation to the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme as well as a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. Thus, the simplicity of our proposed system has important implications for the implementation of a practical real-time video transmission system.

  19. 76 FR 60931 - Records Schedules; Availability and Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ..., fact sheets, slogans, posters, publications, videos, and public service announcements. 2. Department of... publications, directives, technical advisories, photographs, posters, motion pictures, video, and sound...

  20. Movement Behaviour of Traditionally Managed Cattle in the Eastern Province of Zambia Captured Using Two-Dimensional Motion Sensors.

    PubMed

    Lubaba, Caesar H; Hidano, Arata; Welburn, Susan C; Revie, Crawford W; Eisler, Mark C

    2015-01-01

Two-dimensional motion sensors use electronic accelerometers to record the lying, standing and walking activity of cattle. Movement behaviour data collected automatically using these sensors over prolonged periods of time could be of use to stakeholders making management and disease control decisions in rural sub-Saharan Africa, leading to potential improvements in animal health and production. Motion sensors were used in this study with the aim of monitoring and quantifying the movement behaviour of traditionally managed Angoni cattle in Petauke District in the Eastern Province of Zambia. This study was designed to assess whether motion sensors were suitable for use on traditionally managed cattle in two veterinary camps in Petauke District. In each veterinary camp, twenty cattle were selected for study. Each animal had a motion sensor placed on its hind leg to continuously measure and record its movement behaviour over a two-week period. Analysing the sensor data using principal components analysis (PCA) revealed that the majority of variability in behaviour among the studied cattle could be attributed to their behaviour at night and in the morning. The behaviour at night was markedly different between veterinary camps, while differences in the morning appeared to reflect varying behaviour across all animals. The study results validate the use of such motion sensors in the chosen setting and highlight the importance of appropriate data summarisation techniques to adequately describe and compare animal movement behaviours if associations with other factors, such as location, breed or health status, are to be assessed.
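The role of PCA as a data-summarisation step can be sketched with entirely synthetic activity data; the animal counts, time blocks and values below are invented for illustration, and only the analysis pattern follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 40 animals x 4 time blocks (night, morning, afternoon,
# evening); each value is the fraction of time spent active.
night = np.concatenate([rng.normal(0.2, 0.02, 20),   # camp A: quiet nights
                        rng.normal(0.6, 0.02, 20)])  # camp B: active nights
other = rng.normal(0.5, 0.02, (40, 3))
X = np.column_stack([night, other])

# PCA via SVD of the mean-centred matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# The first component is dominated by the night-time column, mirroring
# the finding that night behaviour drives most of the variability.
print(np.argmax(np.abs(Vt[0])))   # most-loaded column index: 0 (night)
```

Because the night column is bimodal across camps while the other blocks barely vary, the first component captures almost all of the variance here.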

  1. Privacy enabling technology for video surveillance

    NASA Astrophysics Data System (ADS)

    Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj

    2006-05-01

In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We address more specifically the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.

  2. Estimation of heart rate variability using a compact radiofrequency motion sensor.

    PubMed

    Sugita, Norihiro; Matsuoka, Narumi; Yoshizawa, Makoto; Abe, Makoto; Homma, Noriyasu; Otake, Hideharu; Kim, Junghyun; Ohtaki, Yukio

    2015-12-01

Physiological indices that reflect autonomic nervous activity are considered useful for monitoring people's health on a daily basis. A number of such indices are derived from heart rate variability, which can be obtained by a radiofrequency (RF) motion sensor without physical contact with the user's body. However, the bulkiness of the RF motion sensors used in previous studies makes them unsuitable for home use. In this study, a new method to measure heart rate variability using a compact RF motion sensor that is sufficiently small to fit in a user's shirt pocket is proposed. To extract a heart-rate-related component from the sensor signal, an algorithm that optimizes a digital filter based on the power spectral density of the signal is proposed. The signals of the RF motion sensor were measured for 29 subjects during the resting state, and their heart rate variability was estimated from the measured signals using the proposed method and a conventional method. The correlation coefficient between the true heart rate and the heart rate estimated by the proposed method was 0.69. Further, the experimental results showed the viability of the RF sensor for monitoring autonomic nervous activity, although some improvements, such as controlling the direction of sensing, were necessary for stable measurement. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
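The general idea of pulling a heart-rate component out of a motion-sensor signal can be sketched with a fixed band-pass filter and peak detection; the paper's algorithm instead optimizes the filter adaptively from the power spectral density, so the sampling rate, band edges and signal model below are purely illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                 # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
heart = 0.3 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm cardiac component
breath = np.sin(2 * np.pi * 0.25 * t)      # respiration, much stronger
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
sensor = heart + breath + noise            # simulated RF sensor output

# Fixed band-pass around typical resting heart rates (0.8-2.5 Hz).
b, a = butter(3, [0.8 / (fs / 2), 2.5 / (fs / 2)], btype="band")
cardiac = filtfilt(b, a, sensor)

# Beat-to-beat intervals from successive peaks of the cardiac band.
peaks, _ = find_peaks(cardiac, distance=fs * 0.4)
ibi = np.diff(peaks) / fs                  # inter-beat intervals (s)
print(round(60 / ibi.mean()))              # estimated heart rate, ~72 bpm
```

The inter-beat interval series `ibi` is exactly what heart-rate-variability indices are computed from; the challenge the paper addresses is that on real sensor data the respiratory and motion components are not so cleanly separable.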

  3. Advanced Video Guidance Sensor (AVGS) Development Testing

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.

    2004-01-01

NASA's Marshall Space Flight Center was the driving force behind the development of the Advanced Video Guidance Sensor, an active sensor system that provides near-range sensor data as part of an automatic rendezvous and docking system. The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state camera to detect the return from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The AVGS will fly as part of the Demonstration of Autonomous Rendezvous Technologies (DART) in October 2004. This development effort has required a great deal of testing of various sorts at every phase of development. Some of the test efforts included optical characterization of performance with the intended target, thermal vacuum testing, performance tests in long-range vacuum facilities, EMI/EMC tests, and performance testing in dynamic situations. The sensor has been shown to track a target at ranges of up to 300 meters, both in vacuum and ambient conditions, to survive and operate during the thermal vacuum cycling specific to the DART mission, to handle EMI well, and to perform well in dynamic situations.

  4. Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring.

    PubMed

    Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok

    2015-03-25

    Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and also there is a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.

  5. Japanese Science Films; a Descriptive and Evaluative Catalog of: 16mm Motion Pictures, 8mm Cartridges, and Video Tapes.

    ERIC Educational Resources Information Center

    Newren, Edward F., Ed.

    One hundred and eighty Japanese 16mm motion pictures, 8mm cartridges, and video tapes produced and judged appropriate for a variety of audience levels are listed in alphabetical order by title with descriptive and evaluative information. A subject heading list and a subject index to the film titles are included, as well as a sample of the…

  6. System of launchable mesoscale robots for distributed sensing

    NASA Astrophysics Data System (ADS)

    Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.

    1999-08-01

A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter, and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on the total volume and power consumption of the payloads due to the small size of the robot, and emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single-chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW, about one-fifth the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.

  7. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    NASA Astrophysics Data System (ADS)

Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure and audio/speech properties. Processing begins by partitioning the video into small segments and extracting several multi-modal features from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank-order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements, with excitement assessment in the commentators' speech, audio energy, slow-motion replay, scene-cut density, and motion activity as features. A detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentator's speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
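The rarity scoring at the core of such a measure can be sketched by fitting a joint density over toy segment features and ranking segments by their negative log-likelihood; the two features, the kernel density estimate and the data below are hypothetical stand-ins for the paper's multi-modal features.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Toy features per video segment: [audio energy, motion activity].
# Most segments are ordinary; five injected ones are loud AND fast.
ordinary = rng.normal([0.3, 0.3], 0.05, (200, 2))
exciting = rng.normal([0.9, 0.9], 0.15, (5, 2))
segments = np.vstack([ordinary, exciting])

# Fit the joint density of the features, then score each segment by
# how unlikely (rare) its feature vector is under that density.
kde = gaussian_kde(segments.T)
rarity = -kde.logpdf(segments.T)    # high rarity = highlight candidate

top5 = np.argsort(rarity)[-5:]      # five rarest segments
print(np.sort(top5))                # indices of the injected segments
```

In a real system the rarity scores would then be used to rank-order segments and keep only the top fraction as the highlight reel.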

  8. A motion compensation technique using sliced blocks and its application to hybrid video coding

    NASA Astrophysics Data System (ADS)

    Kondo, Satoshi; Sasai, Hisao

    2005-07-01

This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a brand-new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing the shapes of the segmented regions to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shape of the surrounding macroblocks can reduce the overhead of describing shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, similar to the conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec and an improvement in bit rate of 5% is confirmed in comparison with H.264.

  9. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent development in video technology, such as the liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereodepth in video displays.

  10. A Motion Tracking and Sensor Fusion Module for Medical Simulation.

    PubMed

    Shen, Yunhe; Wu, Fan; Tseng, Kuo-Shih; Ye, Ding; Raymond, John; Konety, Badrinath; Sweet, Robert

    2016-01-01

    Here we introduce a motion tracking or navigation module for medical simulation systems. Our main contribution is a sensor fusion method for proximity or distance sensors integrated with inertial measurement unit (IMU). Since IMU rotation tracking has been widely studied, we focus on the position or trajectory tracking of the instrument moving freely within a given boundary. In our experiments, we have found that this module reliably tracks instrument motion.
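A position estimate from proximity or distance sensors can be blended with IMU integration using, for example, a complementary filter; the sketch below is a generic one-dimensional illustration under invented signal models, not the module's actual fusion method.

```python
import numpy as np

def fuse(imu_acc, range_pos, dt, alpha=0.98):
    """Complementary filter: trust the IMU prediction at high frequency
    and the absolute (but noisy) range reading at low frequency. This
    minimal sketch corrects position only, not velocity."""
    pos, vel = float(range_pos[0]), 0.0
    out = []
    for a, z in zip(imu_acc, range_pos):
        vel += a * dt
        pred = pos + vel * dt                 # IMU-only prediction
        pos = alpha * pred + (1 - alpha) * z  # blend with range reading
        out.append(pos)
    return np.array(out)

# Synthetic trial: instrument moves as x(t) = sin(t); the IMU has a
# constant bias, the range sensor is unbiased but noisy.
dt = 0.01
t = np.arange(0, 10, dt)
true_pos = np.sin(t)
imu_acc = -np.sin(t) + 0.2                    # biased accelerometer
range_pos = true_pos + np.random.default_rng(0).normal(0, 0.05, t.size)

fused = fuse(imu_acc, range_pos, dt)
imu_only = np.cumsum(np.cumsum(imu_acc) * dt) * dt   # drifts badly
# Fused estimate tracks better than IMU-only dead reckoning (prints True).
print(np.abs(fused - true_pos).max() < np.abs(imu_only - true_pos).max())
```

A Kalman filter would additionally correct the velocity state and weight the sensors by their noise statistics; the complementary filter is simply the cheapest illustration of the same high/low-frequency split.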

  11. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film and sport-coaching industries. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures or elbow and knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) sensor that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  12. Second Interim Report on the Installation and Evaluation of Weigh-In-Motion Utilizing Quartz-Piezo Sensor Technology

    DOT National Transportation Integrated Search

    1999-11-01

    The objective of this study is to determine the sensor survivability, accuracy and reliability of quartz-piezoelectric weigh-in-motion (WIM) sensors under actual traffic conditions in Connecticut's environment. This second interim report provides a s...

  13. Integrated multisensor perimeter detection systems

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.

    2007-10-01

    The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.

  14. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).

  15. Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

    PubMed Central

    Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2017-01-01

A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video through MCTF and the temporal pooling strategy, and simulates human visual perception through MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed VQA method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
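The final temporal pooling step can be illustrated with a simple worst-case-weighted pooling over per-GoF scores; the pooling rule, fraction and numbers here are an assumed stand-in, since the abstract does not specify the strategy.

```python
import numpy as np

def pool_video_quality(gof_quality, worst_frac=0.3):
    """Temporal pooling that emphasises the worst-quality groups of
    frames, a common VQA heuristic (the paper's exact strategy may
    differ; this is an illustrative stand-in)."""
    q = np.sort(np.asarray(gof_quality, dtype=float))
    k = max(1, int(len(q) * worst_frac))   # number of worst GoFs to keep
    return q[:k].mean()

# Per-GoF quality scores for a short clip (two visibly degraded GoFs).
gof = [0.9, 0.85, 0.4, 0.88, 0.92, 0.5, 0.87]
print(round(pool_video_quality(gof), 3))   # -> 0.45
```

Averaging all seven scores would give about 0.76, masking the two degraded GoFs; worst-case pooling reflects the observation that viewers judge a clip largely by its quality drops.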

  16. Assessment of Fall Characteristics From Depth Sensor Videos.

    PubMed

    O'Connor, Jennifer J; Phillips, Lorraine J; Folarinde, Bunmi; Alexander, Gregory L; Rantz, Marilyn

    2017-07-01

    Falls are a major source of death and disability in older adults; little data, however, are available about the etiology of falls in community-dwelling older adults. Sensor systems installed in independent and assisted living residences of 105 older adults participating in an ongoing technology study were programmed to record live videos of probable fall events. Sixty-four fall video segments from 19 individuals were viewed and rated using the Falls Video Assessment Questionnaire. Raters identified that 56% (n = 36) of falls were due to an incorrect shift of body weight and 27% (n = 17) from losing support of an external object, such as an unlocked wheelchair or rolling walker. In 60% of falls, mobility aids were in the room or in use at the time of the fall. Use of environmentally embedded sensors provides a mechanism for real-time fall detection and, ultimately, may supply information to clinicians for fall prevention interventions. [Journal of Gerontological Nursing, 43(7), 13-19.]. Copyright 2017, SLACK Incorporated.

  17. Extraction and Analysis of Respiratory Motion Using Wearable Inertial Sensor System during Trunk Motion

    PubMed Central

    Gaidhani, Apoorva; Moon, Kee S.; Ozturk, Yusuf; Lee, Sung Q.; Youm, Woosub

    2017-01-01

    Respiratory activity is an essential vital sign of life that can indicate changes in typical breathing patterns and irregular body functions such as asthma and panic attacks. Many times, there is a need to monitor breathing activity while performing day-to-day functions such as standing, bending, trunk stretching or during yoga exercises. A single IMU (inertial measurement unit) can be used in measuring respiratory motion; however, breathing motion data may be influenced by a body trunk movement that occurs while recording respiratory activity. This research employs a pair of wireless, wearable IMU sensors custom-made by the Department of Electrical Engineering at San Diego State University. After appropriate sensor placement for data collection, this research applies principles of robotics, using the Denavit-Hartenberg convention, to extract relative angular motion between the two sensors. One of the obtained relative joint angles in the “Sagittal” plane predominantly yields respiratory activity. An improvised version of the proposed method and wearable, wireless sensors can be suitable to extract respiratory information while performing sports or exercises, as they do not restrict body motion or the choice of location to gather data. PMID:29258214
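
    The paper extracts relative joint angles using the Denavit-Hartenberg convention; as a simpler stand-in, the sketch below computes the relative rotation between two IMU orientation quaternions and reads off the sagittal-plane (pitch) component as a breathing proxy. The (w, x, y, z) quaternion convention and function names are illustrative assumptions, not the authors' implementation.

```python
import math

def q_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z) = its inverse."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def relative_pitch_deg(q_upper, q_lower):
    """Angle between the two IMU orientations about the lateral (y)
    axis, i.e. the sagittal-plane component used as a breathing proxy."""
    w, x, y, z = q_mul(q_conj(q_lower), q_upper)
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))  # clamp for safety
    return math.degrees(math.asin(s))
```

    Streaming this angle per sample and band-pass filtering around typical breathing rates would separate respiration from slower trunk motion.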

  18. Inertial navigation sensor integrated motion analysis for autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Roberts, Barry; Bhanu, Bir

    1992-01-01

    Recent work on INS integrated motion analysis is described. Results were obtained with a maximally passive system of obstacle detection (OD) for ground-based vehicles and rotorcraft. The OD approach involves motion analysis of imagery acquired by a passive sensor in the course of vehicle travel to generate range measurements to world points within the sensor FOV. INS data and scene analysis results are used to enhance interest point selection, the matching of the interest points, and the subsequent motion-based computations, tracking, and OD. The most important lesson learned from the research described here is that the incorporation of inertial data into the motion analysis program greatly improves the analysis and makes the process more robust.

  19. Effectiveness of Serious Games for Leap Motion on the Functionality of the Upper Limb in Parkinson's Disease: A Feasibility Study.

    PubMed

    Oña, Edwin Daniel; Balaguer, Carlos; Cano-de la Cuerda, Roberto; Collado-Vázquez, Susana; Jardón, Alberto

    2018-01-01

    The design and application of Serious Games (SG) based on the Leap Motion sensor are presented as a tool to support the rehabilitation therapies for upper limbs. Initially, the design principles and their implementation are described, focusing on improving both unilateral and bilateral manual dexterity and coordination. The design of the games has been supervised by specialized therapists. To assess the therapeutic effectiveness of the proposed system, a protocol of trials with Parkinson's patients has been defined. Evaluations of the physical condition of the participants in the study, at the beginning and at the end of the treatment, are carried out using standard tests. The specific measurements of each game give the therapist more detailed information about the patients' evolution after finishing the planned protocol. The obtained results support the fact that the set of developed video games can be combined to define different therapy protocols and that the information obtained is richer than the one obtained through current clinical metrics, serving as a method of motor function assessment.

  20. Effectiveness of Serious Games for Leap Motion on the Functionality of the Upper Limb in Parkinson's Disease: A Feasibility Study

    PubMed Central

    Balaguer, Carlos; Collado-Vázquez, Susana; Jardón, Alberto

    2018-01-01

    The design and application of Serious Games (SG) based on the Leap Motion sensor are presented as a tool to support the rehabilitation therapies for upper limbs. Initially, the design principles and their implementation are described, focusing on improving both unilateral and bilateral manual dexterity and coordination. The design of the games has been supervised by specialized therapists. To assess the therapeutic effectiveness of the proposed system, a protocol of trials with Parkinson's patients has been defined. Evaluations of the physical condition of the participants in the study, at the beginning and at the end of the treatment, are carried out using standard tests. The specific measurements of each game give the therapist more detailed information about the patients' evolution after finishing the planned protocol. The obtained results support the fact that the set of developed video games can be combined to define different therapy protocols and that the information obtained is richer than the one obtained through current clinical metrics, serving as a method of motor function assessment. PMID:29849550

  1. Validation of enhanced kinect sensor based motion capturing for gait assessment

    PubMed Central

    Müller, Björn; Ilg, Winfried; Giese, Martin A.

    2017-01-01

    Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capturing system can only be set up temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and promising results have been published with respect to gait analysis. We here present a motion capturing system that is easy to set up, flexible with respect to the sensor locations and delivers high accuracy in gait parameters comparable to a gold-standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413
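
    Step width, the parameter where the system reportedly agrees best with VICON, can be illustrated with a toy calculation over tracked ankle coordinates. The definition used here (mean lateral separation of the two ankles over the walk) is one common choice and an assumption, not necessarily the paper's exact definition.

```python
def step_width(left_ankle_x, right_ankle_x):
    """Mean lateral (medio-lateral axis) separation between left and
    right ankle positions in metres, over equal-length trajectories.

    Sketch: real pipelines evaluate this at heel-strike events rather
    than averaging every frame."""
    if len(left_ankle_x) != len(right_ankle_x) or not left_ankle_x:
        raise ValueError("need equal-length, non-empty trajectories")
    return sum(abs(l - r) for l, r in zip(left_ankle_x, right_ankle_x)) \
        / len(left_ankle_x)
```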

  2. Using the Scroll Wheel on a Wireless Mouse as a Motion Sensor

    NASA Astrophysics Data System (ADS)

    Taylor, Richard S.; Wilson, William R.

    2010-12-01

    Since its inception in the mid-80s, the computer mouse has undergone several design changes. As the mouse has evolved, physicists have found new ways to utilize it as a motion sensor. For example, the rollers in a mechanical mouse have been used as pulleys to study the motion of a magnet moving through a copper tube as a quantitative demonstration of Lenz's law and to study mechanical oscillators (e.g., mass-spring system and compound pendulum).1-3 Additionally, the optical system in an optical mouse has been used to study a mechanical oscillator (e.g., mass-spring system).4 The argument for using a mouse as a motion sensor has been and continues to be availability and cost. This paper continues this tradition by detailing the use of the scroll wheel on a wireless mouse as a motion sensor.
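
    Converting scroll-wheel detents into linear displacement only requires the wheel's effective radius and the number of detents per revolution. Both calibration numbers below are hypothetical, not taken from the article; in practice they would be measured for the specific mouse used.

```python
import math

def displacement_m(tick_count, ticks_per_rev=24, wheel_radius_m=0.008):
    """Convert accumulated scroll-wheel detent counts into the linear
    displacement of a string running over the wheel.

    ticks_per_rev and wheel_radius_m are hypothetical calibration
    values; each detent advances the string by one tick's share of the
    wheel circumference."""
    circumference = 2.0 * math.pi * wheel_radius_m
    return tick_count * circumference / ticks_per_rev
```

    Sampling the tick count at fixed intervals and differencing gives velocity, as with any rotary-encoder motion sensor.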

  3. Highly Sensitive Flexible Human Motion Sensor Based on ZnSnO3/PVDF Composite

    NASA Astrophysics Data System (ADS)

    Yang, Young Jin; Aziz, Shahid; Mehdi, Syed Murtuza; Sajid, Memoon; Jagadeesan, Srikanth; Choi, Kyung Hyun

    2017-07-01

    A highly sensitive body motion sensor has been fabricated based on a composite active layer of zinc stannate (ZnSnO3) nano-cubes and poly(vinylidene fluoride) (PVDF) polymer. The thin film-based active layer was deposited on polyethylene terephthalate flexible substrate through D-bar coating technique. Electrical and morphological characterizations of the films and sensors were carried out to discover the physical characteristics and the output response of the devices. The synergistic effect between piezoelectric ZnSnO3 nanocubes and β phase PVDF provides the composite with a desirable electrical conductivity, remarkable bend sensitivity, and excellent stability, ideal for the fabrication of a motion sensor. The recorded resistance of the sensor towards the bending angles of -150° to 0° to 150° changed from 20 MΩ to 55 MΩ to 100 MΩ, respectively, showing the composite to be a very good candidate for motion sensing applications.
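
    The reported calibration (20 MΩ at -150°, 55 MΩ at 0°, 100 MΩ at +150°) can be inverted to estimate bend angle from a resistance reading. The piecewise-linear interpolation between the three published points is an assumption about the device response between those points, not something the abstract states.

```python
# (angle in degrees, resistance in ohms) — the three points reported
# in the abstract; behaviour between them is assumed linear.
CAL = [(-150.0, 20e6), (0.0, 55e6), (150.0, 100e6)]

def bend_angle(resistance_ohm):
    """Piecewise-linear inversion of the calibration points above."""
    pts = sorted(CAL, key=lambda p: p[1])  # order by resistance
    if not pts[0][1] <= resistance_ohm <= pts[-1][1]:
        raise ValueError("resistance outside calibrated range")
    for (a0, r0), (a1, r1) in zip(pts, pts[1:]):
        if r0 <= resistance_ohm <= r1:
            t = (resistance_ohm - r0) / (r1 - r0)
            return a0 + t * (a1 - a0)
```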

  4. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  5. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942
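
    The one-HMM-per-activity recognition scheme described above can be sketched with the classic forward algorithm over a discrete feature-symbol sequence: each activity's model scores the sequence, and recognition picks the highest-scoring model. The toy model parameters below are illustrative assumptions, not the paper's trained values.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under one HMM
    (forward algorithm). obs: list of symbol indices; start_p[s],
    trans_p[p][s], emit_p[s][o] are probabilities."""
    states = range(len(start_p))
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

def recognize(obs, models):
    """models: {activity_name: (start_p, trans_p, emit_p)}.
    Returns the activity whose HMM best explains the sequence."""
    return max(models, key=lambda k: forward_log_likelihood(obs, *models[k]))
```

    A real system would train these parameters per activity (e.g. with Baum-Welch) on quantized skeleton features.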

  6. Sensor and Video Monitoring of Water Quality at Bristol Floating Harbour

    NASA Astrophysics Data System (ADS)

    Chen, Yiheng; Han, Dawei

    2017-04-01

    The water system is an essential component of a smart city for its sustainability and resilience. The harbourside is a focal area of Bristol, with new buildings and features redeveloped in the last ten years, attracting numerous visitors through its diversity of attractions and beautiful views. There is a strong relationship between the satisfaction of visitors and local people and the water quality in the Harbour: the freshness and beauty of the water body please people as well as benefit the aquatic ecosystems. As we are entering a data-rich era, this pilot project aims to explore the concept of using video cameras and smart sensors to collect and monitor water quality conditions at the Bristol harbourside. The video cameras and smart sensors are connected to the Bristol Is Open network, an open programmable city platform. This will be the first attempt to collect water quality data in real time in the Bristol urban area over a wireless network. The videos and images of the water body collected by the cameras will be correlated with the in-situ water quality parameters for research purposes. The successful implementation of the sensors can attract more academic researchers and industrial partners to expand the sensor network to multiple locations around the city covering the other parts of the Harbour and the River Avon, leading to a new generation of urban infrastructure models.

  7. Effect of tilt on strong motion data processing

    USGS Publications Warehouse

    Graizer, V.M.

    2005-01-01

    In the near-field of an earthquake the effects of the rotational components of ground motion may not be negligible compared to the effects of translational motions. Analyses of the equations of motion of horizontal and vertical pendulums show that horizontal sensors are sensitive not only to translational motion but also to tilts. Ignoring this tilt sensitivity may produce unreliable results, especially in calculations of permanent displacements and long-period motions. In contrast to horizontal sensors, vertical sensors do not have these limitations, since they are less sensitive to tilts. In general, only six-component systems measuring rotations and accelerations, or three-component systems similar to those used in inertial navigation that assure purely translational motion of the accelerometers, can be used to calculate residual displacements. © 2004 Elsevier Ltd. All rights reserved.
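
    The tilt sensitivity described here means a horizontal pendulum sensor records the true translational acceleration plus a gravity projection, a_measured ≈ a_true + g·sin(tilt). If the tilt history were independently known (e.g. from a rotational sensor), the correction would be a one-liner; this is a sketch of the physics, not the paper's processing chain.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def corrected_horizontal_accel(a_measured, tilt_rad):
    """Remove the gravity component that a tilted horizontal sensor
    records in addition to true translational acceleration:
        a_measured = a_true + G * sin(tilt)
    Requires an independent measurement of tilt, which is exactly what
    conventional three-component strong-motion records lack."""
    return a_measured - G * math.sin(tilt_rad)
```

    Without the tilt term, double-integrating a_measured for permanent displacement accumulates the unbounded error the abstract warns about.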

  8. Recognition using gait.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William

    2007-09-01

    Gait, or an individual's manner of walking, is one approach for recognizing people at a distance. Studies in psychophysics and medicine indicate that humans can recognize people by their gait and have found twenty-four different components to gait that, taken together, make it a unique signature. Besides not requiring close sensor contact, gait also does not necessarily require a cooperative subject. Using video data of people walking in different scenarios and environmental conditions, we develop and test an algorithm that uses shape and motion to identify people from their gait. The algorithm uses dynamic time warping to match stored templates against an unknown sequence of silhouettes extracted from a person walking. While results under similar constraints and conditions are very good, the algorithm quickly degrades with varying conditions such as surface and clothing.
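
    The dynamic time warping step that matches a stored template against an unknown silhouette sequence follows the standard dynamic-programming recurrence. In this sketch the per-frame silhouette features are reduced to scalars for illustration; the real algorithm would compare shape descriptors with an appropriate distance.

```python
def dtw_distance(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    """Dynamic time warping cost between two feature sequences,
    classic O(len_a * len_b) dynamic programme."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

    Recognition would score an unknown walk against every stored template and pick the identity with the smallest warped cost.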

  9. Self-evaluation on Motion Adaptation for Service Robots

    NASA Astrophysics Data System (ADS)

    Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru

    We propose a self-evaluation method that allows service robots to adapt their motions to environmental changes. Motions such as walking, dancing and demonstration are described as time-series patterns. These motions are optimized for the robot's architecture and for a particular surrounding environment; in an unknown operating environment, the robot may fail to accomplish its tasks. We therefore propose autonomous motion generation based on heuristic search over histories of internal sensor values; new motion patterns are explored in the unknown operating environment using self-evaluation. The robot has prepared motions that accomplish the tasks in the environment it was designed for, and the internal sensor values observed while executing these motions capture the results of interacting with that environment. Self-evaluation is defined as the difference between the internal sensor values observed in the designed environment and those observed in the unknown operating environment; the proposed method modifies the motions so that the interaction results match in both environments. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, the robot's global position, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.

  10. An automated data exploitation system for airborne sensors

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicle, dismount, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly trace down potential threats and crimes.

  11. Determination of pitch rotation in a spherical birefringent microparticle

    NASA Astrophysics Data System (ADS)

    Roy, Basudev; Ramaiya, Avin; Schäffer, Erik

    2018-03-01

    Rotational motion of a three-dimensional spherical microscopic object can occur in pitch, yaw, or roll. Among these, the yaw motion has been conventionally studied using the intensity of scattered light from birefringent microspheres through crossed polarizers. Up until now, however, there has been no way to study the pitch motion of spherical microspheres. Here, we suggest a new method to study the pitch motion of birefringent microspheres under crossed polarizers by measuring the 2-fold asymmetry in the scattered signal, either using video microscopy or with optical tweezers. We show a couple of simple examples of pitch rotation determination using video microscopy: for a microsphere attached to a kinesin molecule while moving along a microtubule, and for a particle diffusing freely in water.

  12. Optoelectronic Sensor System for Guidance in Docking

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Book, Michael L.; Jackson, John L.

    2004-01-01

    The Video Guidance Sensor (VGS) system is an optoelectronic sensor that provides automated guidance between two vehicles. In the original intended application, the two vehicles would be spacecraft docking together, but the basic principles of design and operation of the sensor are applicable to aircraft, robots, vehicles, or other objects that may be required to be aligned for docking, assembly, resupply, or precise separation. The system includes a sensor head containing a monochrome charge-coupled-device video camera and pulsed laser diodes mounted on the tracking vehicle, and passive reflective targets on the tracked vehicle. The lasers illuminate the targets, and the resulting video images of the targets are digitized. Then, from the positions of the digitized target images and known geometric relationships among the targets, the relative position and orientation of the vehicles are computed. As described thus far, the VGS system is based on the same principles as those of the system described in "Improved Video Sensor System for Guidance in Docking" (MFS-31150), NASA Tech Briefs, Vol. 21, No. 4 (April 1997), page 9a. However, the two systems differ in the details of design and operation. The VGS system is designed to operate with the target completely visible within a relative-azimuth range of ±10.5° and a relative-elevation range of ±8°. The VGS acquires and tracks the target within that field of view at any distance from 1.0 to 110 m and at any relative roll, pitch, and/or yaw angle within ±10°. The VGS produces sets of distance and relative-orientation data at a repetition rate of 5 Hz. The software of this system also accommodates the simultaneous operation of two sensors for redundancy.
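
    As a simplified stand-in for the full six-degree-of-freedom solve from the digitized target positions, range alone can be illustrated with the pinhole-camera relation between the targets' known physical spacing and their pixel separation. Parameter names and values here are illustrative assumptions, not VGS specifications.

```python
def target_range_m(target_separation_m, pixel_separation, focal_length_px):
    """Pinhole-camera range estimate: distance = f_px * B / d_px,
    where B is the known physical spacing between two reflective
    targets and d_px their measured separation in the image.
    A sketch only; the real sensor solves full relative pose."""
    if pixel_separation <= 0:
        raise ValueError("targets must be resolved in the image")
    return focal_length_px * target_separation_m / pixel_separation
```

    The same relation shows why accuracy degrades with range: at 110 m the pixel separation shrinks, so a fixed centroiding error maps to a larger range error.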

  13. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
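
    Selecting the largest mutually consistent set of matches as a maximum-weighted clique is NP-hard in general; a common greedy approximation (not necessarily the paper's exact search) adds candidate matches in decreasing score order, keeping only those compatible with everything already accepted.

```python
def greedy_weighted_clique(weights, compatible):
    """Greedy approximation to the maximum-weighted clique used to pick
    a mutually consistent set of feature matches.

    weights: {match_id: score}; compatible(a, b) -> bool encodes the
    relative constraints (e.g. matches must preserve 3D distances).
    Sketch only; exact clique search would be more expensive."""
    clique = []
    for m in sorted(weights, key=weights.get, reverse=True):
        if all(compatible(m, c) for c in clique):
            clique.append(m)
    return clique
```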

  14. Blue guardian: an open architecture for rapid ISR demonstration

    NASA Astrophysics Data System (ADS)

    Barrett, Donald A.; Borntrager, Luke A.; Green, David M.

    2016-05-01

    Throughout the Department of Defense (DoD), acquisition, platform integration, and life cycle costs for weapons systems have continued to rise. Although Open Architecture (OA) interface standards are one of the primary methods being used to reduce these costs, the Air Force Rapid Capabilities Office (AFRCO) has extended the OA concept and chartered the Open Mission System (OMS) initiative with industry to develop and demonstrate a consensus-based, non-proprietary, OA standard for integrating subsystems and services into airborne platforms. The new OMS standard provides the capability to decouple vendor-specific sensors, payloads, and service implementations from platform-specific architectures and is still in the early stages of maturation and demonstration. The Air Force Research Laboratory (AFRL) - Sensors Directorate has developed the Blue Guardian program to demonstrate advanced sensing technology utilizing open architectures in operationally relevant environments. Over the past year, Blue Guardian has developed a platform architecture using the Air Force's OMS reference architecture and conducted a ground and flight test program of multiple payload combinations. Systems tested included a vendor-unique variety of Full Motion Video (FMV) systems, a Wide Area Motion Imagery (WAMI) system, a multi-mode radar system, processing and database functions, multiple decompression algorithms, multiple communications systems, and a suite of software tools. Initial results of the Blue Guardian program show the promise of OA to DoD acquisitions, especially for Intelligence, Surveillance and Reconnaissance (ISR) payload applications. Specifically, the OMS reference architecture was extremely useful in reducing the cost and time required for integrating new systems.

  15. USB video image controller used in CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxuan; Wang, Yuxia; Fan, Hong

    2002-09-01

    The CMOS process is the mainstream technique in VLSI and offers a high level of integration. The SE402 is a multifunction microcontroller that integrates image data I/O ports, clock control, exposure control and digital signal processing into one chip, reducing the chip count and board area. This paper focuses on the USB video image controller used with a CMOS image sensor and presents its application in a digital still camera.

  16. Literature review on monitoring technologies and their outcomes in independently living elderly people.

    PubMed

    Peetoom, Kirsten K B; Lexis, Monique A S; Joore, Manuela; Dirksen, Carmen D; De Witte, Luc P

    2015-07-01

    To obtain insight into what kind of monitoring technologies exist to monitor activity in-home, what the characteristics and aims of applying these technologies are, what kind of research has been conducted on their effects and what kind of outcomes are reported. A systematic document search was conducted within the scientific databases Pubmed, Embase, Cochrane, PsycINFO and Cinahl, complemented by Google Scholar. Documents were included in this review if they reported on monitoring technologies that detect activities of daily living (ADL) or significant events, e.g. falls, of elderly people in-home, with the aim of prolonging independent living. Five main types of monitoring technologies were identified: PIR motion sensors, body-worn sensors, pressure sensors, video monitoring and sound recognition. In addition, multicomponent technologies and smart home technologies were identified. Research into the use of monitoring technologies is widespread, but in its infancy, consisting mainly of small-scale studies and including few longitudinal studies. Monitoring technology is a promising field, with applications to the long-term care of elderly persons. However, monitoring technologies have to be brought to the next level, with longitudinal studies that evaluate their (cost-)effectiveness to demonstrate the potential to prolong independent living of elderly persons.

  17. The development of a performance assessment methodology for activity based intelligence: A study of spatial, temporal, and multimodal considerations

    NASA Astrophysics Data System (ADS)

    Lewis, Christian M.

    Activity Based Intelligence (ABI) is the derivation of information from a series of individual actions, interactions, and transactions being recorded over a period of time, usually in motion imagery and/or Full Motion Video. Due to the growth of unmanned aerial systems technology and the preponderance of mobile video devices, more interest has developed in analyzing people's actions and interactions in these video streams. Currently only visually subjective quality metrics exist for determining the utility of these data in detecting specific activities. One common misconception is that ABI boils down to a simple resolution problem; more pixels and higher frame rates are better. Increasing resolution simply provides more data, not necessarily more information. As part of this research, an experiment was designed and performed to address this assumption. Nine sensors spanning four modalities were placed on top of the Chester F. Carlson Center for Imaging Science in order to record a group of participants executing a scripted set of activities. The modalities include data from the visible, long-wave infrared, multispectral, and polarimetric regimes. The activities were scripted to cover a wide range of spatial and temporal interactions (i.e., walking, jogging, and a group sporting event). As with any large data acquisition, only a subset of this data was analyzed for this research: specifically, a walking object-exchange scenario and a simulated RPG. In order to analyze this data, several steps of preparation occurred: the data were spatially and temporally registered; the individual modalities were fused; a tracking algorithm was implemented; and an activity detection algorithm was applied. To develop a performance assessment for these activities, a series of spatial and temporal degradations were performed. Upon completion of this work, the ground-truth ABI dataset will be released to the community for further analysis.

  18. Videosensor for the Detection of Unsafe Driving Behavior in the Proximity of Black Spots

    PubMed Central

    Fuentes, Andres; Fuentes, Ricardo; Cabello, Enrique; Conde, Cristina; Martin, Isaac

    2014-01-01

    This paper discusses the overall design and implementation of a video sensor for the detection of risky behaviors of car drivers near previously identified and georeferenced black spots. The main goal is to provide a visual/audio alert that informs the driver of the proximity of an area with a high incidence of highway accidents, but only if the driving behavior could result in a risky situation. The paper proposes a video sensor for detecting and supervising driver behavior, its main target being manual distractions, so the driver's hands are supervised. A GPS signal is also considered: the GPS information is compared with a database of georeferenced black spots to determine the relative proximity of a risky area. The outputs of the video sensor and GPS sensor are combined to evaluate possible risky behavior. The results are promising in terms of risk analysis; validation for use in the automotive industry is left as future work. PMID:25347580

  19. Videosensor for the detection of unsafe driving behavior in the proximity of black spots.

    PubMed

    Fuentes, Andres; Fuentes, Ricardo; Cabello, Enrique; Conde, Cristina; Martin, Isaac

    2014-10-24

    This paper discusses the overall design and implementation of a video sensor for the detection of risky behaviors of car drivers near previously identified and georeferenced black spots. The main goal is to provide a visual/audio alert that informs the driver of the proximity of an area with a high incidence of highway accidents, but only if the driving behavior could result in a risky situation. The paper proposes a video sensor for detecting and supervising driver behavior, its main target being manual distractions, so the driver's hands are supervised. A GPS signal is also considered: the GPS information is compared with a database of georeferenced black spots to determine the relative proximity of a risky area. The outputs of the video sensor and GPS sensor are combined to evaluate possible risky behavior. The results are promising in terms of risk analysis; validation for use in the automotive industry is left as future work.
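
    The GPS side of the comparison above reduces to a great-circle distance test against the georeferenced black-spot database. The haversine formula below is standard; the 500 m alert radius is an assumed threshold, not a value from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude
    points, using a mean Earth radius of 6371 km."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * R * math.asin(math.sqrt(a))

def near_black_spot(lat, lon, black_spots, radius_m=500.0):
    """True if the vehicle position is within radius_m of any
    georeferenced black spot. black_spots: iterable of (lat, lon);
    radius_m is an assumed alert threshold."""
    return any(haversine_m(lat, lon, b_lat, b_lon) <= radius_m
               for b_lat, b_lon in black_spots)
```

    The alert would only fire when this proximity test and the video sensor's distraction flag are both true.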

  20. Learning Projectile Motion with the Computer Game "Scorched 3D"

    NASA Astrophysics Data System (ADS)

    Jurcevic, John S.

    2008-01-01

    For most of our students, video games are a normal part of their lives. We should take advantage of this medium to teach physics in a manner that is engrossing for our students. In particular, modern video games incorporate accurate physics in their game engines, and they allow us to visualize the physics through flashy and captivating graphics. I recently used the game "Scorched 3D" to help my students understand projectile motion.

  1. 47 CFR 101.141 - Microwave modulation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 2 DS-1 1.60 6.17 N/A 4 DS-1 2.50 6.17 N/A 4 DS-1 3.75 12.3 N/A 8 DS-1 5.0 18.5 N/A 12 DS-1 10.0 44.7..., the minimum payload capacity must be 12.3 Mbits/s. (5) Transmitters carrying digital motion video... section, provided that at least 50 percent of the payload is digital video motion material and the minimum...

  2. Video didactic at the point of care impacts hand hygiene compliance in the neonatal intensive care unit (NICU).

    PubMed

    Hoang, Danthanh; Khawar, Nayaab; George, Maria; Gad, Ashraf; Sy, Farrah; Narula, Pramod

    2018-04-01

    To increase the hand-washing (HW) duration of staff and visitors in the NICU to a minimum of 20 seconds, as recommended by the Centers for Disease Control and Prevention (CDC). The intervention included a video didactic triggered by a motion sensor to play above the wash basin. The video enacted the CDC HW technique in real time and displayed a 20-second timer. HW was reviewed from surveillance video. Swabs of hands were plated and observed for qualitative growth (QG) of bacterial colonies. In visitors, the mean HW duration at baseline was 16.3 seconds and increased to 23.4 seconds at the 2-week interval (p = .003) and 22.9 seconds at the 9-month interval (p < .0005). In staff, the mean HW duration at baseline was 18.4 seconds and increased to 29.0 seconds at the 2-week interval (p = .001) and 25.7 seconds at the 9-month interval (p < .0005). In visitors, HW compliance at baseline was 33% and increased to 52% at the 2-week interval (p = .076) and 69% at the 9-month interval (p = .001). In staff, HW compliance at baseline was 42% and increased to 64% at the 2-week interval (p = .025) and 72% at the 9-month interval (p = .001). Increasing HW was significantly associated with a linear decrease in bacterial QG. The intervention significantly increased mean HW time and compliance with a 20-second wash time, decreased bacterial QG of hands, and these results were sustained over a 9-month period. © 2018 American Society for Healthcare Risk Management of the American Hospital Association.

  3. The effect of action video game playing on sensorimotor learning: Evidence from a movement tracking task.

    PubMed

    Gozli, Davood G; Bavelier, Daphne; Pratt, Jay

    2014-10-12

    Research on the impact of action video game playing has revealed performance advantages on a wide range of perceptual and cognitive tasks. It is not known, however, whether playing such games confers similar advantages in sensorimotor learning. To address this issue, the present study used a manual motion-tracking task that allowed for a sensitive measure of both accuracy and improvement over time. When the target motion pattern was consistent over trials, gamers improved at a faster rate and eventually outperformed non-gamers. Performance between the two groups, however, did not differ initially. When the target motion was inconsistent, changing on every trial, results revealed no difference between gamers and non-gamers. Together, our findings suggest that video game playing confers no reliable benefit in sensorimotor control, but it does enhance sensorimotor learning, enabling superior performance in tasks with a consistent and predictable structure. Copyright © 2014. Published by Elsevier B.V.

  4. ShakeMapple : tapping laptop motion sensors to map the felt extents of an earthquake

    NASA Astrophysics Data System (ADS)

    Bossu, Remy; McGilvary, Gary; Kamb, Linus

    2010-05-01

    There is a significant pool of untapped sensor resources available in the motion sensors embedded in portable computers. Included primarily to detect sudden strong motion so that the disk heads can be parked to prevent damage in the event of a fall or other severe motion, these sensors may be tapped for other uses as well. We have developed a system that takes advantage of the Sudden Motion Sensors embedded in Apple Macintosh laptops to record earthquake strong-motion data and rapidly build maps of where, and to what extent, an earthquake has been felt. After an earthquake, it is vital to understand the damage caused, especially in urban environments, where damage is often concentrated. Gathering information on these impacts to determine which areas are likely to be most affected can aid in distributing emergency services effectively. The ShakeMapple system operates in the background, continuously saving the most recent data from the motion sensors. After an earthquake has occurred, ShakeMapple calculates the peak acceleration within a time window around the expected arrival and sends it to servers at the EMSC. A map plotting the felt responses is then generated and presented on the web. Because large-scale testing of such an application is inherently difficult, we propose to organize a broadly distributed "simulated event" test. The software will be available for download in April, after which we plan to organize a large-scale test by the summer. At a specified time, participating testers will be asked to create their own strong motion to be registered and submitted by the ShakeMapple client. From these responses, a felt map will be produced representing the broadly felt effects of the simulated event.
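The core measurement — peak acceleration within a time window around the expected wave arrival — reduces to a simple maximum over windowed samples. A minimal sketch (names and the tuple layout are assumptions; the real client also handles buffering and upload):

```python
def peak_acceleration(samples, t_expected, half_window):
    """Return the peak acceleration magnitude within
    [t_expected - half_window, t_expected + half_window].

    samples: iterable of (t, ax, ay, az) accelerometer readings.
    Returns None if no sample falls inside the window.
    """
    mags = [(ax * ax + ay * ay + az * az) ** 0.5
            for (t, ax, ay, az) in samples
            if abs(t - t_expected) <= half_window]
    return max(mags) if mags else None
```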

  5. Video-based heart rate monitoring across a range of skin pigmentations during an acute hypoxic challenge.

    PubMed

    Addison, Paul S; Jacquel, Dominique; Foo, David M H; Borg, Ulf R

    2017-11-09

    The robust monitoring of heart rate from the video-photoplethysmogram (video-PPG) during challenging conditions requires new analysis techniques. The work reported here extends current research in this area by applying a motion-tolerant algorithm to extract high-quality video-PPGs from a cohort of subjects undergoing marked heart rate changes during a hypoxic challenge, and exhibiting a full range of skin pigmentation types. High uptimes in reported video-based heart rate (HRvid) were targeted, while retaining high accuracy in the results. Ten healthy volunteers were studied during a double desaturation hypoxic challenge. Video-PPGs were generated from the acquired video image stream and processed to generate heart rate. HRvid was compared to the pulse rate posted by a reference pulse oximeter device (HRp). Agreement between the video-based heart rate and that provided by the pulse oximeter was as follows: Bias = -0.21 bpm, RMSD = 2.15 bpm, least squares fit gradient = 1.00 (Pearson R = 0.99, p < 0.0001), with a 98.78% reporting uptime. The difference between HRvid and HRp exceeded 5 and 10 bpm for 3.59% and 0.35% of the reporting time, respectively, and at no point did these differences exceed 25 bpm. Excellent agreement was found between HRvid and HRp in a study covering the whole range of skin pigmentation types (Fitzpatrick scales I-VI), using standard room lighting and with moderate subject motion. Although promising, further work should include a larger cohort with multiple subjects per Fitzpatrick class, combined with a more rigorous motion and lighting protocol.
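The agreement statistics reported above (bias, RMSD, reporting uptime, and the fraction of time the difference exceeds a threshold) can be computed as follows. This is a generic sketch, not the study's analysis code; `None` marking a missing HRvid value is an assumption:

```python
def agreement_stats(hr_vid, hr_ref, threshold=5.0):
    """Bias, RMSD, reporting uptime (%), and % of reports with
    |difference| > threshold, for paired heart-rate series.

    hr_vid entries may be None when no video-based value was reported.
    """
    paired = [(v, r) for v, r in zip(hr_vid, hr_ref) if v is not None]
    uptime = 100.0 * len(paired) / len(hr_vid)
    diffs = [v - r for v, r in paired]
    bias = sum(diffs) / len(diffs)
    rmsd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    pct_over = 100.0 * sum(abs(d) > threshold for d in diffs) / len(diffs)
    return bias, rmsd, uptime, pct_over
```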

  6. SU-E-J-196: Implementation of An In-House Visual Feedback System for Motion Management During Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, V; James, J; Wang, B

    Purpose: To describe an in-house video goggle feedback system for motion management during simulation and treatment of radiation therapy patients. Methods: This video goggle system works by splitting and amplifying the video output signal directly from the Varian Real-Time Position Management (RPM) workstation or TrueBeam imaging workstation into two signals using a Distribution Amplifier. The first signal S[1] gets reconnected back to the monitor. The second signal S[2] gets connected to the input of a Video Scaler. The S[2] signal can be scaled, cropped and panned in real time to display only the relevant information to the patient. The output signal from the Video Scaler gets connected to an HDMI Extender Transmitter via a DVI-D to HDMI converter cable. The S[2] signal can be transported from the HDMI Extender Transmitter to the HDMI Extender Receiver located inside the treatment room via a Cat5e/6 cable. Inside the treatment room, the HDMI Extender Receiver is permanently mounted on the wall near the conduit where the Cat5e/6 cable is located. An HDMI cable is used to connect from the output of the HDMI Receiver to the video goggles. Results: This video goggle feedback system is currently being used at two institutions. At one institution, the system was just recently implemented for simulation and treatments on two breath-hold gated patients with 8+ total fractions over a two-month period. At the other institution, the system was used to treat 100+ breath-hold gated patients on three Varian TrueBeam linacs and has been operational for twelve months. The average time to prepare the video goggle system for treatment is less than 1 minute. Conclusion: The video goggle system provides an efficient and reliable method to set up a video feedback signal for radiotherapy patients with motion management.

  7. Using Passive Sensing to Estimate Relative Energy Expenditure for Eldercare Monitoring

    PubMed Central

    2012-01-01

    This paper describes ongoing work in analyzing sensor data logged in the homes of seniors. An estimation of relative energy expenditure is computed using motion density from passive infrared motion sensors mounted in the environment. We introduce a new algorithm for detecting visitors in the home using motion sensor data and a set of fuzzy rules. The visitor algorithm, as well as a previous algorithm for identifying time-away-from-home (TAFH), are used to filter the logged motion sensor data. Thus, the energy expenditure estimate uses data collected only when the resident is home alone. Case studies are included from TigerPlace, an Aging in Place community, to illustrate how the relative energy expenditure estimate can be used to track health conditions over time. PMID:25266777
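The motion-density input to the energy-expenditure estimate can be approximated by binning passive infrared (PIR) firing events over time. A minimal sketch under stated assumptions (event timestamps in seconds, one-hour bins; not the TigerPlace implementation):

```python
def motion_density(event_times, t_start, t_end, bin_seconds=3600):
    """Count PIR sensor firings per time bin between t_start and t_end.

    The resulting per-bin counts serve as a simple proxy for relative
    activity level; events outside [t_start, t_end) are ignored.
    """
    n_bins = int((t_end - t_start) // bin_seconds)
    counts = [0] * n_bins
    for t in event_times:
        if t_start <= t < t_end:
            counts[int((t - t_start) // bin_seconds)] += 1
    return counts
```

In the paper's pipeline, such counts would be computed only over periods when the resident is home alone, after the visitor and time-away-from-home filters have been applied.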

  8. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. 
Reductions in signal conversion processing steps, major improvements in video noise reduction, and an added capability to pass audio and embedded digital data within the digital video signal stream are the significant performance increases associated with the incorporation of digital video interface standards. By analyzing the historical progression of military CMS developments, establishing a systems engineering process for CMS design, tracing the commercial evolution of video signal standardization, adopting commercial video signal terminology and definitions, and comparing and contrasting CMS architecture modifications using digital video interfaces, this paper provides a technical explanation of how a systems engineering approach to video interface standardization can result in extendible and affordable cockpit management systems.

  9. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation of the various types of frames present in an MPEG video in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index, while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.
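As a toy illustration of compressed-domain feature mapping (not the authors' exact features), the sketch below averages the leading DCT coefficients of a frame's blocks into a low-dimensional signature and retrieves the nearest stored frame by Euclidean distance; the names and the choice of k are assumptions:

```python
def frame_signature(dct_blocks, k=4):
    """Average the first k DCT coefficients (zig-zag order assumed)
    over all blocks of a frame, giving a k-dimensional signature."""
    sig = [0.0] * k
    for block in dct_blocks:
        for i in range(k):
            sig[i] += block[i]
    n = len(dct_blocks)
    return [s / n for s in sig]

def nearest(query, database):
    """Index of the stored signature closest to the query (squared
    Euclidean distance); stands in for a real database index."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(database)), key=lambda i: dist(query, database[i]))
```

A real system would use a proper multidimensional index rather than a linear scan, but the signature-then-search structure is the same.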

  10. In vitro validation and reliability study of electromagnetic skin sensors for evaluation of end range of motion positions of the hip.

    PubMed

    Audenaert, E A; Vigneron, L; Van Hoof, T; D'Herde, K; van Maele, G; Oosterlinck, D; Pattyn, C

    2011-12-01

    There is growing evidence that femoroacetabular impingement (FAI) is a probable risk factor for the development of early osteoarthritis in the nondysplastic hip. As FAI arises with end range of motion activities, measurement errors related to skin movement might be higher than anticipated when using previously reported methods for kinematic evaluation of the hip. We performed an in vitro validation and reliability study of a noninvasive method to define pelvic and femur positions in end range of motion activities of the hip using an electromagnetic tracking device. Motion data, collected from sensors attached to the bone and skin of 11 cadaver hips, were simultaneously obtained and compared in a global reference frame. Motion data were then transposed in the hip joint local coordinate systems. Observer-related variability in locating the anatomical landmarks required to define the local coordinate system and variability of determining the hip joint center was evaluated. Angular root mean square (RMS) differences between the bony and skin sensors averaged 3.2° (SD 3.5°) and 1.8° (SD 2.3°) in the global reference frame for the femur and pelvic sensors, respectively. Angular RMS differences between the bony and skin sensors in the hip joint local coordinate systems ranged at end range of motion and dependent on the motion under investigation from 1.91 to 5.81°. The presented protocol for evaluation of hip motion seems to be suited for the 3-D description of motion relevant to the experimental and clinical evaluation of femoroacetabular impingement.
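The reported angular RMS differences between bone- and skin-mounted sensors follow the standard root-mean-square formula; a minimal sketch (names are illustrative):

```python
def angular_rms_diff(bone_angles, skin_angles):
    """RMS of per-sample angular differences (degrees) between
    bone-mounted and skin-mounted sensor readings."""
    diffs = [b - s for b, s in zip(bone_angles, skin_angles)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```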

  11. Real-time full-motion color Flash lidar for target detection and identification

    NASA Astrophysics Data System (ADS)

    Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt

    2015-05-01

    Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery, the typical point cloud becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich, geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high-definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary- and fixed-wing aircraft. We conclude with a discussion of future work.

  12. Motion Sickness

    MedlinePlus

    ... sickness from certain visual activities, such as playing video games or watching spinning objects. Symptoms can strike without ... of your body. For example, when playing a video game, your eyes may sense that you are moving ...

  13. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. 
We propose SafeVchat, the first solution that achieves a satisfactory detection rate by using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors along with the Bag-of-Visual-Words framework. In addition, an investigation of a new contour feature for detecting obscene content is presented.

  14. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text-information (CMR-video/n = 49) or standard text-information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015-April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p < .001). No difference was found regarding motion artefacts between CMR-video and CMR-standard. 
Patient ability to relax during cardiovascular magnetic resonance imaging increased by adding video information prior to the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion artefacts. Video information prior to examinations can be an easy and time-effective method to help patients cooperate in imaging procedures. © 2017 John Wiley & Sons Ltd.

  15. Traffic Monitor

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.

  16. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. Smoother video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MVs) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, intraprediction in HEVC for regions that are interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding-time reduction with a small loss in rate-distortion performance, compared to existing transcoding algorithms and normal HEVC coding.
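The MV interpolation step — predicting one motion vector for a combined HEVC PU from several H.264/AVC macroblock MVs — amounts to a weighted average. A minimal sketch, assuming the weights come from overlap areas or inverse center distances (the paper's exact weighting is not reproduced here):

```python
def interpolate_mv(mvs, weights):
    """Weighted average of macroblock motion vectors (mvx, mvy).

    mvs:     list of (mvx, mvy) from the H.264/AVC macroblocks
             covered by the combined HEVC PU.
    weights: non-negative weights, e.g. overlap areas or inverse
             distances from macroblock centers to the PU center.
    """
    total = sum(weights)
    mvx = sum(w * mv[0] for w, mv in zip(weights, mvs)) / total
    mvy = sum(w * mv[1] for w, mv in zip(weights, mvs)) / total
    return (mvx, mvy)
```

The resulting vector then seeds the HEVC motion search, narrowing the search range.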

  17. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
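The two-level weighting idea — low embedding strength in visually attentive regions, high strength elsewhere — can be sketched as an additive spread-spectrum-style embed. The alpha values and saliency threshold below are illustrative assumptions, not the paper's parameters:

```python
def embed(coeffs, watermark_bits, saliency,
          alpha_low=0.5, alpha_high=2.0, threshold=0.5):
    """Embed one watermark bit per coefficient, with strength chosen
    from a two-level map: weak in salient (attentive) regions,
    strong in non-salient regions."""
    out = []
    for c, b, s in zip(coeffs, watermark_bits, saliency):
        alpha = alpha_low if s >= threshold else alpha_high
        out.append(c + alpha * (1 if b else -1))
    return out
```

In the actual scheme the coefficients are wavelet coefficients and the saliency values come from the motion-compensated VAM, but the strength-selection logic follows this pattern.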

  18. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects proposed. Cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units restore some visual function in humans whose visual fields reduced by defects like retinitis pigmentosa.

  19. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

    This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
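The closed loop described relies on position feedback from the Smart Braid sensors. A generic PID update of the kind such a controller might use is sketched below; the gains, names, and the PID form itself are illustrative assumptions (the article does not specify this structure):

```python
def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One PID update. `measured` would come from the contraction
    sensor (inductance mapped to muscle length or joint angle);
    `state` carries the integral term and the previous error."""
    err = setpoint - measured
    state["i"] += err * dt           # integral accumulation
    d = (err - state["e"]) / dt      # finite-difference derivative
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d
```

The returned command would drive the McKibben muscle's pressure input at each control step.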

  20. Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review

    PubMed Central

    Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai

    2015-01-01

    Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature on video sensor-based gait representation approaches. PMID:25574935

  1. A sensor fusion method for tracking vertical velocity and height based on inertial and barometric altimeter measurements.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2014-07-24

    A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04-0.24 m/s; height RMSE was in the range 5-68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions.
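
    The final complementary-filter stage described above can be sketched as follows. This is a simplified illustration rather than the authors' implementation: it blends inertially predicted height with barometric altitude, and the time constant `tau` is an assumed tuning parameter:

```python
import numpy as np

def fuse_height(acc_z, baro_h, dt=0.01, tau=1.0):
    """Complementary filter: trust integrated vertical acceleration at high
    frequencies and the barometric altitude at low frequencies.

    acc_z:  gravity-compensated vertical acceleration samples [m/s^2]
    baro_h: pressure-altitude samples [m], same length as acc_z
    """
    alpha = tau / (tau + dt)              # blending weight (high-pass share)
    h, v = baro_h[0], 0.0                 # initial height and vertical velocity
    heights, velocities = [], []
    for a, hb in zip(acc_z, baro_h):
        v += a * dt                       # integrate acceleration -> velocity
        h_pred = h + v * dt               # inertial height prediction
        h = alpha * h_pred + (1 - alpha) * hb  # barometer corrects the drift
        heights.append(h)
        velocities.append(v)
    return np.array(heights), np.array(velocities)
```

    With a stationary device (zero acceleration, constant pressure altitude) the estimate simply holds the barometric height, which is exactly the long-term behaviour the low-frequency branch is meant to provide.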

  2. A spatiotemporal decomposition strategy for personal home video management

    NASA Astrophysics Data System (ADS)

    Yi, Haoran; Kozintsev, Igor; Polito, Marzia; Wu, Yi; Bouguet, Jean-Yves; Nefian, Ara; Dulong, Carole

    2007-01-01

    With the advent and proliferation of low cost and high performance digital video recorder devices, an increasing number of personal home video clips are recorded and stored by consumers. Compared to image data, video data is larger in size and richer in multimedia content. Efficient access to video content is expected to be more challenging than image mining. Previously, we have developed a content-based image retrieval system and the benchmarking framework for personal images. In this paper, we extend our personal image retrieval system to include personal home video clips. A possible initial solution to video mining is to represent video clips by a set of key frames extracted from them, thus converting the problem into an image search one. Here we report that a careful selection of key frames may improve the retrieval accuracy. However, because video also has a temporal dimension, its key frame representation is inherently limited. The use of temporal information can give us a better representation of video content at semantic object and concept levels than an image-only based representation. In this paper we propose a bottom-up framework that combines interest point tracking, image segmentation and motion-shape factorization to decompose the video into spatiotemporal regions. We show an example application of activity concept detection using the trajectories extracted from the spatiotemporal regions. The proposed approach shows good potential for concise representation and indexing of objects and their motion in real-life consumer video.

  3. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system.

    PubMed

    Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji

    2015-08-01

    To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients' tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of the 16 trajectories were analyzed. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
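
    The reported error statistic (absolute mean difference + 2 standard deviations of the per-frame Y offsets) can be computed as in this small sketch (function name illustrative):

```python
import numpy as np

def positional_error(target_y, field_y):
    """Summarise per-frame Y offsets between the exposed target centre and
    the exposed field centre as mean(|d|) + 2 * std(|d|)."""
    d = np.abs(np.asarray(target_y, dtype=float) -
               np.asarray(field_y, dtype=float))
    return d.mean() + 2 * d.std()
```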

  4. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
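
    The first step in any such camera control model is relating a camera's (pan, tilt) state to a viewing direction on the sphere of its panoramic viewspace. A minimal sketch under an assumed axis convention (pan about the vertical axis, tilt toward it); this is not the paper's calibrated PTZ model:

```python
import math

def ptz_to_unit_vector(pan_deg, tilt_deg):
    """Map (pan, tilt) angles to a 3-D unit viewing direction.

    Convention assumed here: z points out of the camera at (0, 0),
    pan rotates toward +x, tilt rotates toward +y.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = math.cos(tilt) * math.sin(pan)
    y = math.sin(tilt)
    z = math.cos(tilt) * math.cos(pan)
    return (x, y, z)
```

    Projecting these unit vectors onto an aerial orthophotograph (the geo-registration step) would additionally require each camera's position and a ground-plane intersection, which the paper's framework estimates per camera.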

  5. Analysing Harmonic Motions with an iPhone's Magnetometer

    ERIC Educational Resources Information Center

    Yavuz, Ahmet; Temiz, Burak Kagan

    2016-01-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone's (or iPad's) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone's magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone's screen using the "Sensor Kinetics"…

  6. Training industrial robots with gesture recognition techniques

    NASA Astrophysics Data System (ADS)

    Piane, Jennifer; Raicu, Daniela; Furst, Jacob

    2013-01-01

    In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, be able to accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naïve Bayes analysis and a Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, by smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a "real" environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was judged by observers, who indicated that the motion of the robot appeared to match the motion in the video.
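
    The trajectory-smoothing step described above can be approximated with a simple moving average over the noisy 3D coordinates. A sketch, assuming an (N, 3) array of hand positions; the window size is an illustrative choice, not the paper's:

```python
import numpy as np

def smooth_trajectory(points, window=5):
    """Moving-average smoothing of a noisy (N, 3) trajectory, one of the
    simplest ways to suppress detection jitter before path planning.

    Returns an (N - window + 1, 3) array of smoothed positions.
    """
    pts = np.asarray(points, dtype=float)
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(pts[:, i], kernel, mode='valid')
                            for i in range(pts.shape[1])])
```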

  7. The right frame of reference makes it simple: an example of introductory mechanics supported by video analysis of motion

    NASA Astrophysics Data System (ADS)

    Klein, P.; Gröber, S.; Kuhn, J.; Fleischhauer, A.; Müller, A.

    2015-01-01

    The selection and application of coordinate systems is an important issue in physics. However, considering different frames of reference in a given problem sometimes seems unintuitive and is difficult for students. We present a concrete problem of projectile motion which vividly demonstrates the value of considering different frames of reference. We use this example to explore the effectiveness of video-based motion analysis (VBMA) as an instructional technique at university level in enhancing students’ understanding of the abstract concept of coordinate systems. A pilot study with 47 undergraduate students indicates that VBMA instruction improves conceptual understanding of this issue.
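
    The pedagogical point can be made numerically: in a frame translating with the projectile's initial horizontal velocity, the horizontal coordinate vanishes and only one-dimensional free fall remains. A small sketch with assumed launch values (not the problem used in the paper):

```python
g = 9.81           # gravitational acceleration [m/s^2]
v0x, v0y = 3.0, 4.0  # assumed launch velocity components [m/s]

def lab_frame(t):
    """Position in the ground frame: the familiar parabola."""
    return (v0x * t, v0y * t - 0.5 * g * t**2)

def moving_frame(t):
    """Same motion seen from a frame translating at v0x: the horizontal
    coordinate is identically zero, leaving 1-D free fall."""
    x, y = lab_frame(t)
    return (x - v0x * t, y)
```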

  8. Parallax visualization of full motion video using the Pursuer GUI

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.; Forgues, Mark B.

    2014-06-01

    In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imagery (WAMI) data using the Pursuer Graphical User Interface (GUI) [1]. In addition to the ability to PV WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.

  9. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
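
    One simple way to obtain drift-free position when only the motion's frequency band is known is to integrate in the frequency domain and discard everything outside the band, where drift accumulates. This sketch illustrates that idea only; it is not the authors' WFLC/BMFLC-based method:

```python
import numpy as np

def integrate_band(acc, fs, f_lo, f_hi):
    """Drift-free double integration of acceleration for motion known to lie
    in [f_lo, f_hi] Hz (f_lo > 0).

    Since a(t) = x''(t), each Fourier component obeys X(f) = -A(f) / (2*pi*f)^2;
    out-of-band components (including DC, the main drift source) are zeroed.
    """
    n = len(acc)
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    X = np.zeros_like(A)
    band = (f >= f_lo) & (f <= f_hi)
    X[band] = A[band] / (-(2 * np.pi * f[band]) ** 2)
    return np.fft.irfft(X, n)
```

    For a pure in-band sinusoid sampled over an integer number of periods, this recovers the displacement essentially exactly; real use would need block-wise processing and window handling, which the paper's adaptive combiners address online.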

  10. Implementation of advanced fiber optic and piezoelectric sensors : fabrication and laboratory testing of piezoelectric ceramic-polymer composite sensors for weigh-in-motion systems.

    DOT National Transportation Integrated Search

    1999-02-01

    Weigh-in-motion (WIM) systems might soon replace the conventional techniques used to enforce : weight restrictions for large vehicles on highways. Currently WIM systems use a piezoelectric : polymer sensor that produces a voltage proportional to an a...

  11. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, query them by analyzing the descriptive information of the data, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. 
Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  12. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  14. Tele-Assessment of the Berg Balance Scale: Effects of Transmission Characteristics.

    PubMed

    Venkataraman, Kavita; Morgan, Michelle; Amis, Kristopher A; Landerman, Lawrence R; Koh, Gerald C; Caves, Kevin; Hoenig, Helen

    2017-04-01

    To compare Berg Balance Scale (BBS) rating using videos with differing transmission characteristics with direct in-person rating. Repeated-measures study for the assessment of the BBS in 8 configurations: in person, high-definition video with slow motion review, standard-definition videos with varying bandwidths and frame rates (768 kilobits per second [kbps] videos at 8, 15, and 30 frames per second [fps], 30 fps videos at 128, 384, and 768 kbps). Medical center. Patients with limitations (N=45) in ≥1 of 3 specific aspects of motor function: fine motor coordination, gross motor coordination, and gait and balance. Not applicable. Ability to rate the BBS in person and using videos with differing bandwidths and frame rates in frontal and lateral views. Compared with in-person rating (7%), 18% (P=.29) of high-definition videos and 37% (P=.03) of standard-definition videos could not be rated. Interrater reliability for the high-definition videos was .96 (95% confidence interval, .94-.97). Rating failure proportions increased from 20% in videos with the highest bandwidth to 60% (P<.001) in videos with the lowest bandwidth, with no significant differences in proportions across frame rate categories. Both frontal and lateral views were critical for successful rating using videos, with 60% to 70% (P<.001) of videos unable to be rated on a single view. Although there is some loss of information when using videos to rate the BBS compared to in-person ratings, it is feasible to reliably rate the BBS remotely in standard clinical spaces. However, optimal video rating requires frontal and lateral views for each assessment, high-definition video with high bandwidth, and the ability to carry out slow motion review. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  15. Inertial Motion Capture Costume Design Study

    PubMed Central

    Szczęsna, Agnieszka; Skurowski, Przemysław; Lach, Ewa; Pruszowski, Przemysław; Pęszor, Damian; Paszkuta, Marcin; Słupik, Janusz; Lebek, Kamil; Janiak, Mateusz; Polański, Andrzej; Wojciechowski, Konrad

    2017-01-01

    The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs). Such a unit is composed of accelerometer, gyroscope and magnetometer. The final quality of an obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. The construction of the system’s architecture with the dataflow programming paradigm makes it easy to add, remove and replace the data processing steps. The modular architecture of the system allows an effortless introduction of a new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. The two key steps of the system design are explored in this paper: the evaluation of sensors and algorithms for the orientation estimation. The three chosen algorithms have been implemented and investigated as part of the experiment. Due to the fact that the selection of the sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affect the quality of the final results. PMID:28304337
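
    For a single tilt axis, the sensor orientation estimation step can be illustrated with a basic complementary filter that blends gyroscope integration (accurate short-term) with the accelerometer's gravity reference (stable long-term). This is a simplified stand-in for the three algorithms the paper evaluates; `alpha` is an assumed blend weight:

```python
import math

def tilt_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
    """Estimate one tilt angle from IMU samples.

    gyro_rates: angular rates about the tilt axis [rad/s]
    accels:     (ax, az) accelerometer pairs; atan2(ax, az) gives the
                gravity-referenced tilt when the sensor is not accelerating
    """
    theta = math.atan2(accels[0][0], accels[0][1])  # initialise from gravity
    for w, (ax, az) in zip(gyro_rates, accels):
        theta_acc = math.atan2(ax, az)              # gravity-based estimate
        # gyro integration dominates; accelerometer slowly pulls out drift
        theta = alpha * (theta + w * dt) + (1 - alpha) * theta_acc
    return theta
```

    A full capture suit, as in the paper, runs a 3-D analogue of this (often with magnetometer heading correction) per IMU and chains the resulting segment orientations through the skeleton for pose estimation.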

  16. Hybrid markerless tracking of complex articulated motion in golf swings.

    PubMed

    Fung, Sim Kwoh; Sundaraj, Kenneth; Ahamed, Nizam Uddin; Kiang, Lam Chee; Nadarajah, Sivadev; Sahayadhas, Arun; Ali, Md Asraf; Islam, Md Anamul; Palaniappan, Rajkumar

    2014-04-01

    Sports video tracking is a research topic that has attracted increasing attention due to its high commercial potential. A number of sports, including tennis, soccer, gymnastics, running, golf, badminton and cricket, have been utilised to display novel ideas in sports motion tracking. The main challenge associated with this research concerns the extraction of a highly complex articulated motion from a video scene. Our research focuses on the development of a markerless human motion tracking system that tracks the major body parts of an athlete straight from a sports broadcast video. We proposed a hybrid tracking method, which consists of a combination of three algorithms (pyramidal Lucas-Kanade optical flow (LK), normalised correlation-based template matching and background subtraction), to track the golfer's head, body, hands, shoulders, knees and feet during a full swing. We then match, track and map the results onto a 2D articulated human stick model to represent the pose of the golfer over time. Our work was tested using two video broadcasts of a golfer, and we obtained satisfactory results. The current outcomes of this research can play an important role in enhancing the performance of a golfer, offer vital information to sports medicine practitioners through technically sound guidance on movements, and should help diminish the risk of golfing injuries. Copyright © 2013 Elsevier Ltd. All rights reserved.
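
    Of the three combined algorithms, normalised correlation-based template matching is the easiest to sketch: slide the body-part template over the frame and keep the location with the highest zero-mean normalised correlation. A naive full-search version (real trackers restrict the search window around the previous position):

```python
import numpy as np

def match_template(frame, template):
    """Return the (row, col) of the best normalised-correlation match."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()        # zero-mean template
    best, best_pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()             # zero-mean window
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

    The zero-mean normalisation is what makes the score robust to local brightness changes between frames, which matters in broadcast footage.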

  17. Technologies for Positioning and Placement of Underwater Structures

    DTIC Science & Technology

    2000-03-01

    for imaging the bottom immediately before placement of the structure. c. Use passive sensors (such as tiltmeters, inclinometers, and gyrocompasses)... [table-of-contents fragment: Acoustic Sensors; Multibeam and Side-Scan Sonar Transducers; Video Camera; Passive Sensors]

  18. Infrared video based gas leak detection method using modified FAST features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to promptly detect invisible leaking gas, which is dangerous and can easily lead to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, existing infrared video based gas leak detection methods can flag all moving regions of a video frame as leaking gas, without discriminating the properties of each detected region; e.g., a walking person in a video frame may also be detected as gas by current gas leak detection methods. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
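
    A much-simplified stand-in for this pipeline: a single-Gaussian per-pixel background model in place of the GMM, and a feature-density test in the spirit of the Pixel-Per-Points condition (diffuse gas plumes yield far fewer corner-like features per unit area than rigid movers such as pedestrians). All names and thresholds here are illustrative assumptions, not the paper's:

```python
import numpy as np

def foreground_mask(frames, k=2.5):
    """Flag pixels of the last frame lying more than k sigma from the
    per-pixel temporal mean (single-mode stand-in for a GMM background)."""
    stack = np.asarray(frames, dtype=float)
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0) + 1e-6      # avoid division-by-zero pixels
    return np.abs(stack[-1] - mu) > k * sigma

def gas_like(num_feature_points, region_area, density_thresh=0.05):
    """Feature-density test: accept a connected component as gas-like when
    it carries few corner features relative to its area."""
    return num_feature_points / region_area < density_thresh
```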

  19. A hybrid video codec based on extended block sizes, recursive integer transforms, improved interpolation, and flexible motion representation

    NASA Astrophysics Data System (ADS)

    Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.

    2011-01-01

    This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.
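
    The temporal-prediction core of any such hybrid codec is block-based motion estimation: for each block of the current frame, find the displacement into the reference frame that minimises a distortion measure, and code only the residual. A naive full-search sketch using the sum of absolute differences (block size and search radius are illustrative; real codecs use fast search and rate-distortion costs):

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=4, radius=2):
    """Full-search motion estimation for one block of `cur`.

    Returns the motion vector (dy, dx) into `ref` minimising the sum of
    absolute differences (SAD), and the SAD itself.
    """
    block = cur[top:top + bsize, left:left + bsize]
    best, mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + bsize > ref.shape[0] or c + bsize > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(ref[r:r + bsize, c:c + bsize] - block).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    return mv, best
```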

  20. Ad Hoc Network Architecture for Multi-Media Networks

    DTIC Science & Technology

    2007-12-01

    sensor network. Video traffic is modeled and simulations are performed via the use of the Sun Small Programmable Object Technology (Sun SPOT) Java...characteristics of video traffic must be studied and understood. This thesis focuses on evaluating the possibility of routing video images over a wireless
