Sample records for optical motion capture

  1. Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.

    PubMed

    Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho

    2016-07-01

    We propose a portable haptic device that provides grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. We therefore used a Bowden-cable-driven mechanism to provide the grasp and push-pull sensations while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force-sensing/force-feedback system and compared the sensed force with the force reproduced by the haptic device. Finally, a needle insertion test was performed to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp and the push-pull force feedback provided by the haptic interface closely matched the forces sensed by the slave robot. We successfully applied the haptic interface in the optical-motion-capture master-slave system. The needle insertion test showed that our haptic feedback provides greater safety than visual observation alone. In summary, we developed a haptic device that produces both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.

  2. Kinematic differences between optical motion capture and biplanar videoradiography during a jump-cut maneuver

    PubMed Central

    Miranda, Daniel L; Rainbow, Michael J; Crisco, Joseph J; Fleming, Braden C

    2012-01-01

    Jumping and cutting activities are investigated in many laboratories attempting to better understand the biomechanics associated with non-contact ACL injury. Optical motion capture is widely used; however, it is subject to soft tissue artifact (STA). Biplanar videoradiography offers a unique approach to collecting skeletal motion without STA. The goal of this study was to compare how STA affects the six-degree-of-freedom motion of the femur and tibia during a jump-cut maneuver associated with non-contact ACL injury. Ten volunteers performed a jump-cut maneuver while their landing leg was imaged using optical motion capture (OMC) and biplanar videoradiography. The within-bone motion differences were compared using anatomical coordinate systems for the femur and tibia. The knee joint kinematic measurements were compared during two periods: before and after ground contact. Over the entire activity, the within-bone motion differences between the two motion capture techniques were significantly lower for the tibia than for the femur about two of the rotational axes (flexion/extension, internal/external) and at the origin. The OMC and biplanar videoradiography knee joint kinematics were in best agreement before landing; kinematic deviations between the two techniques increased significantly after contact. This study provides information on the kinematic discrepancies between OMC and biplanar videoradiography that can be used to optimize methods employing both technologies for studying dynamic in vivo knee kinematics and kinetics during a jump-cut maneuver. PMID:23084785

  3. [An Introduction to A Newly-developed "Acupuncture Needle Manipulation Training-evaluation System" Based on Optical Motion Capture Technique].

    PubMed

    Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo

    2016-12-25

    In the present paper, the authors introduce a newly developed "Acupuncture Needle Manipulation Training-evaluation System" based on optical motion capture. The system is composed of two parts, a sensor and software, and overcomes some shortcomings of mechanical motion capture techniques. It can analyze data from both the pressing hand and the needle-inserting hand during acupuncture performance, and its software is available in personal computer (PC), Android, and Apple iOS versions. The system can record and analyze information on any operator's needling manipulations, and is quite helpful for teachers in teaching, training, and examining students in clinical practice.

  4. Low-cost human motion capture system for postural analysis onboard ships

    NASA Astrophysics Data System (ADS)

    Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore

    2011-07-01

    The study of human equilibrium, also known as postural stability, spans several research areas (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed using motion analysis techniques for recording human movement and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest to the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service precludes the use of optical systems commonly employed for human motion analysis: these sensors are not designed to operate in adverse environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study is a motion acquisition system that can easily be used onboard ships. It combines two methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an inertial measurement unit (IMU). The image-based motion capture system, made up of three low-cost, light, and compact video cameras, was validated against a commercial optical system and then used to test the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from laboratory tests and preliminary field campaigns are presented.

  5. Quantitative analysis of arm movement smoothness

    NASA Astrophysics Data System (ADS)

    Szczesna, Agnieszka; Błaszczyszyn, Monika

    2017-07-01

    The paper deals with the problem of quantitative smoothness analysis of motion data. We investigated values of movement units, fluidity, and jerk for the healthy and paralyzed arms of patients with hemiparesis after stroke. Patients performed a drinking task. To validate the approach, the movements of 24 patients were captured using an optical motion capture system.
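
A common family of smoothness indices used in such analyses can be illustrated with a dimensionless integrated-jerk measure. This is a generic sketch, not the paper's specific movement-unit or fluidity metrics; the minimum-jerk test profile and the noise level are illustrative assumptions.

```python
import numpy as np

def dimensionless_jerk(position, dt):
    """Dimensionless integrated squared jerk of a 1-D trajectory.

    Lower values indicate smoother movement. Jerk is the third
    time-derivative of position, approximated by finite differences.
    """
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    duration = dt * (len(position) - 1)
    amplitude = np.ptp(position)  # peak-to-peak movement extent
    # The factor duration^5 / amplitude^2 makes the metric unitless.
    return duration**5 / amplitude**2 * np.sum(jerk**2) * dt

t = np.linspace(0.0, 1.0, 501)
smooth = 10 * t**3 * (6 * t**2 - 15 * t + 10)  # minimum-jerk reach profile
rng = np.random.default_rng(0)
shaky = smooth + 0.05 * rng.standard_normal(t.size)  # same path plus tremor
```

A paretic arm's trajectory would play the role of `shaky` here: the added high-frequency content inflates the jerk integral by orders of magnitude.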

  6. Validation of enhanced kinect sensor based motion capturing for gait assessment

    PubMed Central

    Müller, Björn; Ilg, Winfried; Giese, Martin A.

    2017-01-01

    Optical motion capture systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capture system can only be placed temporarily. The Microsoft Kinect v2 sensor is comparatively cheap, and promising results have been published with respect to gait analysis. We present a motion capture system that is easy to set up, flexible with respect to sensor locations, and delivers accuracy in gait parameters comparable to a gold-standard motion capture system (VICON). Further, we demonstrate that sensor setups which track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413

  7. Validation of Attitude and Heading Reference System and Microsoft Kinect for Continuous Measurement of Cervical Range of Motion Compared to the Optical Motion Capture System.

    PubMed

    Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon

    2016-08-01

    To compare the optical motion capture system (MoCap), attitude and heading reference system (AHRS) sensor, and Microsoft Kinect for the continuous measurement of cervical range of motion (ROM), fifteen healthy adult subjects were asked to sit in front of the Kinect camera, with optical markers and AHRS sensors attached to the body, in a room equipped with optical motion capture cameras. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measurement of cervical ROM was assessed by calculating correlation coefficients and Bland-Altman plots with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9) and fair for axial rotation (ICC>0.8). ICC values between MoCap and the Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. AHRS and the Kinect system can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
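
The 95% limits of agreement used in this kind of validation are straightforward to compute. The sketch below uses hypothetical paired readings, not the study's data; the ±10° acceptance threshold mirrors the criterion quoted in the abstract.

```python
import numpy as np

def bland_altman_loa(a, b):
    """95% limits of agreement between two paired measurement series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired cervical-rotation readings (degrees): reference vs. sensor.
mocap = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
sensor = np.array([11.0, 19.5, 31.0, 39.0, 51.5])

lo, hi = bland_altman_loa(mocap, sensor)
# Agreement criterion from the abstract: both limits within +/-10 degrees.
acceptable = abs(lo) < 10 and abs(hi) < 10
```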

  8. Ubiquitous human upper-limb motion estimation using wearable sensors.

    PubMed

    Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang

    2011-07-01

    Human motion capture technologies are used in a wide spectrum of applications, including interactive gaming and learning, animation, film special effects, health care, and navigation. Existing human motion capture techniques, which use structured arrays of multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of its agility, upper-limb motion estimation has been regarded as the most difficult problem in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton. Parameters are defined according to the Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results show that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
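
The Denavit-Hartenberg forward kinematics mentioned above amount to chaining one homogeneous transform per link. The two-link planar arm below is a toy stand-in for the paper's 5-DOF upper-limb model, shown only to illustrate the convention.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform for one link, standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-link transforms; returns the base-to-end-effector transform."""
    T = np.eye(4)
    for params in dh_params:
        T = T @ dh_matrix(*params)
    return T

# Toy 2-link planar arm (unit-length links): joint 1 at +90 deg, joint 2 at -90 deg.
T = forward_kinematics([(np.pi / 2, 0.0, 1.0, 0.0),
                        (-np.pi / 2, 0.0, 1.0, 0.0)])
end_position = T[:3, 3]  # expected near (1, 1, 0)
```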

  9. Miniature low-power inertial sensors: promising technology for implantable motion capture systems.

    PubMed

    Lambrecht, Joris M; Kirsch, Robert F

    2014-11-01

    Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation, with and without magnetometer-based heading compensation, relative to a research-grade optical motion capture system. The root mean square error was less than 4° about all axes in both dynamic and static conditions. Using four sensors, recording from seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size. Such ultra-small, low-power sensors provide exciting new avenues for monitoring movement disorders, for movement-based command interfaces for assistive devices, and for implementing kinematic feedback systems for assistive interventions like functional electrical stimulation.

  10. Dynamics analysis of microsphere in a dual-beam fiber-optic trap with transverse offset.

    PubMed

    Chen, Xinlin; Xiao, Guangzong; Luo, Hui; Xiong, Wei; Yang, Kaiyong

    2016-04-04

    A comprehensive dynamics analysis of a microsphere in a dual-beam fiber-optic trap with transverse offset is presented. As the offset distance between the two counterpropagating beams increases, the motion of the microsphere progresses from capture, through spiral motion and orbital rotation, to escape. We analyze the transformation process and mechanism of the four motion types using the ray-optics approximation. Dynamic simulations show the existence of critical offset distances at which the motion type changes. The result is an important step toward explaining physical phenomena in a dual-beam fiber-optic trap with transverse offset, and is generally applicable to achieving controllable motion of microspheres in integrated systems, such as microfluidic and lab-on-a-chip systems.

  11. Data Fusion Based on Optical Technology for Observation of Human Manipulation

    NASA Astrophysics Data System (ADS)

    Falco, Pietro; De Maria, Giuseppe; Natale, Ciro; Pirozzi, Salvatore

    2012-01-01

    The adoption of human observation is becoming more and more frequent within imitation learning and programming-by-demonstration (PbD) approaches to robot programming. For robotic systems equipped with anthropomorphic hands, the observation phase is very challenging and no ultimate solution exists. This work proposes a novel mechatronic approach to observing human hand motion during manipulation tasks. The strategy is based on the combined use of an optical motion capture system and a low-cost data glove equipped with novel joint-angle sensors based on optoelectronic technology. The two information sources are combined through a sensor fusion algorithm based on the extended Kalman filter (EKF), suitably modified to tackle the problem of marker occlusions typical of optical motion capture systems. This approach requires a kinematic model of the human hand; another key contribution of this work is a new method to calibrate this model.
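
The occlusion-handling idea, running the filter's prediction step while skipping the correction when a marker disappears, can be shown with a deliberately simplified scalar Kalman filter. This is not the paper's full EKF over the hand model, and the noise values are illustrative.

```python
import math

def kalman_track(measurements, q=0.01, r=0.25):
    """Scalar constant-position Kalman filter.

    NaN measurements stand in for occluded markers: the filter runs its
    prediction step but skips the correction, so the estimate (and its
    growing variance) coasts through the occlusion.
    """
    x, p = 0.0, 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise q
        if not math.isnan(z):
            k = p / (p + r)  # correct only when the marker is visible
            x += k * (z - x)
            p *= (1.0 - k)
        estimates.append(x)
    return estimates

nan = float("nan")
zs = [1.0, 1.1, 0.9, nan, nan, 1.0, 1.05]  # two occluded frames mid-sequence
est = kalman_track(zs)
```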

  12. Motion capture for human motion measuring by using single camera with triangle markers

    NASA Astrophysics Data System (ADS)

    Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi

    2005-12-01

    This study aims to realize motion capture for measuring 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in sports, medical, and engineering fields, no optical motion capture method with one camera has been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration yields the 3D coordinate transformation parameters and a lens distortion parameter using the modified DLT method. The triangle markers enable calculation of the coordinate along the depth direction of the camera frame. In 3D position measurement experiments using the MMC in a cubic measurement volume 2 m on each side, the average error in the measured position of a triangle marker's center of gravity was less than 2 mm. Compared with conventional multi-camera motion capture, the MMC thus has sufficient accuracy for 3D measurement. By placing a triangle marker on each human joint, the MMC was able to capture walking, standing-up, and bending-and-stretching motions. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate a marker's position from its measured velocity was proposed in order to improve the accuracy of the MMC.
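
The reason a marker of known size fixes the depth coordinate follows from the pinhole model: a segment of known real length L that images at pixel length l with focal length f lies at depth Z = fL/l. A minimal sketch with a fronto-parallel segment and illustrative numbers, far simpler than the full triangle-pose recovery in the paper:

```python
def depth_from_known_length(focal_px, real_len_m, image_len_px):
    """Pinhole-camera depth estimate for a fronto-parallel segment.

    A segment of known real length appears smaller in the image the
    farther it is from the camera: Z = f * L / l.
    """
    return focal_px * real_len_m / image_len_px

# A 0.10 m triangle side imaged at 100 px by a camera with a 1000 px focal length:
z = depth_from_known_length(1000.0, 0.10, 100.0)  # expected 1.0 m
```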

  13. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate capture of the spatial motion of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, together with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision), optical motion capture systems have a set of advantages: high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine-vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying, and tracking similar targets and for markerless object motion capture, was developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
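
The photogrammetric 3D measurement underlying such multi-camera systems reduces, per point, to multi-view triangulation. Below is a minimal two-view linear (DLT) triangulation sketch with toy normalized cameras; it is not the Mosca system's actual calibration or solver.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 are the (u, v) image
    coordinates of the same point. Returns the 3D point.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # homogeneous solution = last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy normalized cameras 1 m apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = (0.0, 0.0)    # projection of the point in camera 1
x2 = (-0.2, 0.0)   # projection of the same point in camera 2
X = triangulate(P1, P2, x1, x2)  # expected near (0, 0, 5)
```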

  14. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensating complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error, and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a persistent challenge in micro-sensor motion capture. An adaptive gait-phase detection algorithm has been developed to enable accurate displacement estimation across different types of activities. The performance of the system was benchmarked against the VICON optical capture system. The experimental results demonstrate the effectiveness of the system in tracking daily activities, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
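
The complementary-filter idea, letting the gyroscope track fast motion while a slower reference signal cancels its drift, reduces to a one-axis sketch. This is far simpler than the paper's error-compensating filter; the bias and tilt values below are illustrative.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (deg/s) with accelerometer tilt (deg) for one axis.

    The gyro term tracks fast motion; the small accelerometer weight
    (1 - alpha) slowly pulls the estimate back, cancelling gyro drift.
    """
    angle = 0.0
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
    return angle

# Stationary sensor: the gyro reads only a 0.5 deg/s bias, while the
# accelerometer reads the true 10 degree tilt. Pure gyro integration
# would drift without bound; the filter settles near the true tilt.
n, dt = 600, 0.01
angle = complementary_filter([0.5] * n, [10.0] * n, dt)
```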

  15. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medicine, sports, education, and industry with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks, and MIDI sliders, which cannot make a virtual character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the captured data to a 3-D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  16. Numerical considerations on control of motion of nanoparticles using scattering field of laser light

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Aizu, Yoshihisa

    2017-05-01

    Most optical manipulation techniques proposed so far depend on carefully fabricated setups and samples. Such conditions can be fixed in the laboratory; however, it remains challenging to manipulate nanoparticles when the environment is not well controlled or is unknown in advance. Coherent light scattered by a rough object, on the other hand, generates a speckle pattern consisting of random interference speckle grains with well-defined statistical properties. In the present study, we numerically investigate the motion of a Brownian particle suspended in water under illumination by a speckle pattern. The particle capture time and the size of the capture area are quantitatively estimated in relation to the optical force and the speckle diameter, to confirm the feasibility of the present method for optical manipulation tasks such as trapping and guiding.

  17. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

    Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of the new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought using the camera calibration method proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values that minimize the objective function. The objective function, in this case, minimizes two errors. The first is the distance error between two markers placed on a wand. The second is the position and orientation error of the retroreflective markers of a static calibration object. The real coordinates of the two objects are calibrated on a coordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with those from a similarly configured Vicon system.
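
The first term of the stage-two objective, the deviation of the reconstructed wand-marker distance from the CMM-calibrated length, can be sketched as a residual function that a nonlinear optimizer would drive toward zero. The coordinates below are illustrative; this is not the OrthoBio implementation.

```python
import math

def wand_length_residuals(marker_pairs, wand_length):
    """Per-frame residuals: reconstructed marker distance minus known wand length.

    marker_pairs is a list of ((x, y, z), (x, y, z)) tuples, one per frame.
    A nonlinear optimizer would tune the camera parameters (which determine
    the reconstructed points) to shrink these residuals.
    """
    return [math.dist(p, q) - wand_length for p, q in marker_pairs]

# Two hypothetical frames of a 0.5 m wand, reconstructed perfectly:
frames = [((0.0, 0.0, 0.0), (0.5, 0.0, 0.0)),
          ((1.0, 1.0, 1.0), (1.0, 1.3, 1.4))]
residuals = wand_length_residuals(frames, 0.5)
rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
```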

  18. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm builds on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot based on a platform with two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the platform. A state-space model of the robot was derived using standard black-box system identification; the inputs (commands) and outputs (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of algorithm quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
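
For context, the ideal no-slip differential-drive kinematics relate the wheel speeds to the instantaneous center of rotation (ICR); wheel slip shifts the true ICR away from this prediction, which is why the paper estimates it from optical flow instead. A sketch with assumed wheel speeds and track width:

```python
def ideal_icr(v_left, v_right, track_width):
    """No-slip differential-drive kinematics.

    Returns (linear speed, angular speed, lateral ICR distance). Under
    slip, the measured ICR deviates from this value, so a visual-odometry
    estimator must observe it directly rather than assume it.
    """
    v = 0.5 * (v_left + v_right)            # body linear speed
    omega = (v_right - v_left) / track_width  # body angular speed
    icr_y = float("inf") if omega == 0.0 else v / omega
    return v, omega, icr_y

# Assumed wheel speeds 0.8 and 1.2 m/s, 0.5 m track width:
v, omega, icr = ideal_icr(0.8, 1.2, 0.5)
```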

  19. Motion detection using extended fractional Fourier transform and digital speckle photography.

    PubMed

    Bhaduri, Basanta; Tay, C J; Quan, C; Sheppard, Colin J R

    2010-05-24

    Digital speckle photography is a useful tool for measuring the motion of optically rough surfaces from the speckle shift that takes place at the recording plane. A simple correlation-based digital speckle photographic system is proposed that implements two simultaneous optical extended fractional Fourier transforms (EFRTs) of different orders, using only a single lens and detector, to simultaneously detect both the magnitude and direction of translation and tilt by capturing only two frames: one before and another after the object motion. The dynamic range and sensitivity of the measurement can be varied readily by altering the position of the mirror(s) used in the optical setup. Theoretical analysis and experimental results are presented.
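
The correlation step at the heart of digital speckle photography can be illustrated with a 1-D FFT-based cross-correlation that recovers an integer speckle shift. The record below is synthetic; this is a numerical analogue, not the optical EFRT implementation described above.

```python
import numpy as np

def speckle_shift_1d(before, after):
    """Estimate the integer shift between two speckle records via FFT correlation."""
    F = np.fft.fft(before)
    G = np.fft.fft(after)
    corr = np.fft.ifft(G * np.conj(F)).real  # circular cross-correlation
    return int(np.argmax(corr))             # peak location = shift in samples

rng = np.random.default_rng(1)
before = rng.standard_normal(256)  # synthetic speckle record
after = np.roll(before, 7)         # object motion shifts the speckle pattern
shift = speckle_shift_1d(before, after)
```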

  20. Computational cameras for moving iris recognition

    NASA Astrophysics Data System (ADS)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  21. Active eye-tracking for an adaptive optics scanning laser ophthalmoscope

    PubMed Central

    Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin

    2015-01-01

    We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370

  22. Inertial motion capture system for biomechanical analysis in pressure suits

    NASA Astrophysics Data System (ADS)

    Di Capua, Massimiliano

    A non-invasive system has been developed at the University of Maryland Space System Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight, and compact fashion, without requiring any external equipment such as that necessary with modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates, and magnetic fields acting upon each IMU; the units are mounted on the head, torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid-body angles measured through an external VICON optical motion capture system. This type of system is the industry standard, and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS operational conditions, accuracy, and drift were evaluated and correlated against the VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations was conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure.
In addition to addressing range of motion, a series of exploration tasks was also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced. The focus of these evaluations was to quantify the reduction in mobility when operating in any of the evaluated pressure suits. This data should be of value in defining new low-cost alternatives for pressure suit performance verification and evaluation. This work demonstrates that the BPMS technology is a viable alternative or companion to optical motion capture; while BPMS is the first motion capture system designed specifically to measure the kinematics of a human in a pressure suit, its capabilities are not constrained to being a measurement tool. The last section of the manuscript is devoted to possible future uses for the system, with a specific focus on pressure suit applications such as the use of BPMS as a master control interface for robot teleoperation, and as an input interface for future robotically augmented pressure suits.

  23. Clinical measurement of the dart throwing motion of the wrist: variability, accuracy and correction.

    PubMed

    Vardakastani, Vasiliki; Bell, Hannah; Mee, Sarah; Brigstocke, Gavin; Kedgley, Angela E

    2018-01-01

    Despite being functionally important, the dart throwing motion is difficult to assess accurately through goniometry. The objectives of this study were to describe a method for reliably quantifying the dart throwing motion using goniometric measurements within a healthy population. Wrist kinematics of 24 healthy participants were assessed using goniometry and optical motion tracking. Three wrist angles were measured at the starting and ending points of the motion: flexion-extension, radial-ulnar deviation and dart throwing motion angle. The orientation of the dart throwing motion plane relative to the flexion-extension axis ranged between 28° and 57° among the tested population. Plane orientations derived from optical motion capture differed from those calculated through goniometry by 25°. An equation to correct the estimation of the plane from goniometry measurements was derived. This was applied and differences in the orientation of the plane were reduced to non-significant levels, enabling the dart throwing motion to be measured using goniometry alone.
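
The correction equation itself is not reproduced in the abstract. A plausible form, sketched here under the assumption of an ordinary least-squares fit between goniometry-derived and motion-capture-derived plane orientations, is a linear map mocap ≈ a·gonio + b. All numbers below are illustrative, not the study's data.

```python
def linear_correction(gonio, mocap):
    """Fit mocap_angle ≈ a * gonio_angle + b by ordinary least squares."""
    n = len(gonio)
    mx = sum(gonio) / n
    my = sum(mocap) / n
    sxx = sum((x - mx) ** 2 for x in gonio)
    sxy = sum((x - mx) * (y - my) for x, y in zip(gonio, mocap))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# hypothetical paired plane orientations (degrees), with a constant
# ~25 deg offset between methods, as reported in the study
gonio = [10.0, 20.0, 30.0, 40.0, 50.0]
mocap = [35.0, 45.0, 55.0, 65.0, 75.0]
a, b = linear_correction(gonio, mocap)
print(round(a, 2), round(b, 2))  # → 1.0 25.0
```

Once fitted, the correction is applied by evaluating `a * measured + b` on new goniometry readings.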

  4. Model-based extended quaternion Kalman filter to inertial orientation tracking of arbitrary kinematic chains.

    PubMed

    Szczęsna, Agnieszka; Pruszowski, Przemysław

    2016-01-01

    Inertial orientation tracking is still an area of active research, especially in the context of outdoor, real-time human motion capture. Existing systems either propose loosely coupled tracking approaches, where each segment is considered independently and the resulting drawbacks are accepted, or tightly coupled solutions that are limited to a fixed chain with few segments. Such solutions have no flexibility to change the skeleton structure, are dedicated to a specific set of joints, and have high computational complexity. This paper proposes a new model-based extended quaternion Kalman filter that estimates orientation based on outputs from inertial measurement unit sensors. The filter considers interdependencies resulting from the construction of the kinematic chain, so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine the degrees of freedom at the connections between segments of the model. For validation, the motion of a three-segment pendulum captured by an optical motion capture system is used. The next step in the research will be to use this method for inertial motion capture with a human skeleton model.
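
The full extended quaternion Kalman filter is beyond the scope of an abstract, but its prediction step, propagating a segment's orientation quaternion with the gyroscope's angular rate, can be sketched as follows. This is a generic quaternion integration in the Hamilton convention, not the paper's filter; the 90°/s test motion is hypothetical.

```python
import math

def quat_mult(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def propagate(q, omega, dt):
    """Prediction step: rotate q by gyro angular velocity omega over dt."""
    wx, wy, wz = omega
    mag = math.sqrt(wx*wx + wy*wy + wz*wz)
    if mag < 1e-12:
        return q
    half = 0.5 * mag * dt
    s = math.sin(half) / mag
    dq = (math.cos(half), wx*s, wy*s, wz*s)  # incremental rotation
    return quat_mult(q, dq)

q = (1.0, 0.0, 0.0, 0.0)                       # identity orientation
for _ in range(100):                           # 1 s at 100 Hz
    q = propagate(q, (0.0, 0.0, math.pi/2), 0.01)  # 90 deg/s about z
# expect ~90 deg rotation about z: q ≈ (cos 45°, 0, 0, sin 45°)
print(round(q[0], 3), round(q[3], 3))  # → 0.707 0.707
```

In the paper's tightly coupled setting, the correction step additionally constrains neighbouring segments through the kinematic chain model.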

  5. A Kinematic Description of the Temporal Characteristics of Jaw Motion for Early Chewing: Preliminary Findings

    ERIC Educational Resources Information Center

    Wilson, Erin M.; Green, Jordan R.; Weismer, Gary

    2012-01-01

    Purpose: The purpose of this investigation was to describe age- and consistency-related changes in the temporal characteristics of chewing in typically developing children between the ages of 4 and 35 months and adults using high-resolution optically based motion capture technology. Method: Data were collected from 60 participants (48 children, 12…

  6. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.

  7. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
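
A scalar intuition for why fusing two sensors beats either alone: under the simplifying (non-EKF) assumption of two independent, unbiased angle estimates, inverse-variance weighting yields a fused estimate whose variance is below both inputs. The angle and variance values below are illustrative only, not the study's data.

```python
def fuse(angle_imu, var_imu, angle_kinect, var_kinect):
    """Inverse-variance weighted fusion of two independent estimates."""
    w1 = 1.0 / var_imu
    w2 = 1.0 / var_kinect
    fused = (w1 * angle_imu + w2 * angle_kinect) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # always smaller than either input variance
    return fused, var

# hypothetical knee flexion estimates (deg) and error variances
fused, var = fuse(42.0, 25.0, 40.0, 16.0)
print(round(fused, 2), round(var, 2))  # → 40.78 9.76
```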

  8. Motion of Cesium Atoms in the One-Dimensional Magneto-Optical Trap

    NASA Technical Reports Server (NTRS)

    Li, Yimin; Chen, Xuzong; Wang, Qingji; Wang, Yiqiu

    1996-01-01

    The force to which Cs atoms are subjected in the one-dimensional magneto-optical trap (1D-MOT) is calculated, and properties of this force are discussed. Several methods to increase the number of Cs atoms in the 1D-MOT are presented on the basis of the analysis of the capture and escape of Cs atoms in the 1D-MOT.

  9. Lumbar joint torque estimation based on simplified motion measurement using multiple inertial sensors.

    PubMed

    Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi

    2015-01-01

    We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system; in this research, however, joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which we measured the motion of bending the knee and waist simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.
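
The kinematic musculoskeletal simulation is not detailed in the abstract. As a highly simplified, hypothetical illustration of how a single link angle maps to a lumbar moment, the static gravitational moment of a trunk segment about the lumbar joint can be written as m·g·l·sin(θ); all parameter values below are invented.

```python
import math

def lumbar_torque_static(trunk_angle_deg, trunk_mass, trunk_com_len, g=9.81):
    """Static gravitational moment (N·m) about the lumbar joint for a
    trunk segment inclined trunk_angle_deg from vertical, with centre
    of mass trunk_com_len metres from the joint."""
    theta = math.radians(trunk_angle_deg)
    # moment arm of the trunk centre of mass about the lumbar joint
    return trunk_mass * g * trunk_com_len * math.sin(theta)

# hypothetical 40 kg trunk, CoM 0.25 m above the lumbar joint, flexed 30 deg
torque = lumbar_torque_static(30.0, 40.0, 0.25)
print(round(torque, 2))
```

A full simulation would add the dynamic (inertial) terms and distribute the moment over muscles, which is what the musculoskeletal model does.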

  10. Evaluation of a portable markerless finger position capture device: accuracy of the Leap Motion controller in healthy adults.

    PubMed

    Tung, James Y; Lulic, Tea; Gonzalez, Dave A; Tran, Johnathan; Dickerson, Clark R; Roy, Eric A

    2015-05-01

    Although motion analysis systems are frequently employed in upper limb motor assessment (e.g. visually-guided reaching), they are resource-intensive and limited to laboratory settings. This study evaluated the reliability and accuracy of a new markerless motion capture device, the Leap Motion controller, for measuring finger position. Testing conditions that influence reliability and agreement between the Leap and a research-grade motion capture system were examined. Nine healthy young adults pointed to 15 targets on a computer screen under two conditions: (1) touching the target (touch) and (2) 4 cm away from the target (no-touch). Leap data were compared to an Optotrak marker attached to the index finger. Across all trials, the root mean square (RMS) error of the Leap system was 17.30 ± 9.56 mm (mean ± SD), sampled at 65.47 ± 21.53 Hz. The percentage of viable trials and the mean sampling rate were significantly lower in the touch condition (44% versus 64%, p < 0.001; 52.02 ± 2.93 versus 73.98 ± 4.48 Hz, p = 0.003). While linear correlations were high (horizontal: r(2) = 0.995, vertical: r(2) = 0.945), the limits of agreement were large (horizontal: -22.02 to +26.80 mm, vertical: -29.41 to +30.14 mm). While not as precise as more sophisticated optical motion capture systems, the Leap Motion controller is sufficiently reliable for measuring motor performance in pointing tasks that do not require high positional accuracy (e.g. reaction time, Fitts' law, trails, bimanual coordination).
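
The limits of agreement quoted above follow the standard Bland-Altman construction: mean difference between the two devices ± 1.96 standard deviations of the paired differences. The computation is sketched below; the paired positions are invented for illustration, not the study's data.

```python
import math

def limits_of_agreement(ref, test):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    diffs = [t - r for r, t in zip(ref, test)]
    n = len(diffs)
    bias = sum(diffs) / n                       # mean difference
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired horizontal positions (mm): Optotrak vs Leap
ref  = [10.0, 25.0, 40.0, 55.0, 70.0]
test = [12.0, 24.0, 43.0, 52.0, 74.0]
lo, hi = limits_of_agreement(ref, test)
print(round(lo, 1), round(hi, 1))  # → -4.7 6.7
```

High correlation with wide limits of agreement, as reported here, is exactly the pattern Bland-Altman analysis is designed to expose.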

  11. Motion tracking to enable pre-surgical margin mapping in basal cell carcinoma using optical imaging modalities: initial feasibility study using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duffy, M.; Richardson, T. J.; Craythorne, E.; Mallipeddi, R.; Coleman, A. J.

    2014-02-01

    A system has been developed to assess the feasibility of using motion tracking to enable pre-surgical margin mapping of basal cell carcinoma (BCC) in the clinic using optical coherence tomography (OCT). This system consists of a commercial OCT imaging system (the VivoSight 1500, MDL Ltd., Orpington, UK), which has been adapted to incorporate a webcam and a single-sensor electromagnetic positional tracking module (the Flock of Birds, Ascension Technology Corp, Vermont, USA). A supporting software interface has also been developed which allows positional data to be captured and projected onto a 2D dermoscopic image in real-time. Initial results using a stationary test phantom are encouraging, with maximum errors in the projected map in the order of 1-2 mm. Initial clinical results were poor due to motion artefact, despite attempts to stabilise the patient. However, the authors present several suggested modifications that are expected to reduce the effects of motion artefact and improve the overall accuracy and clinical usability of the system.

  12. Discomfort Evaluation of Truck Ingress/Egress Motions Based on Biomechanical Analysis

    PubMed Central

    Choi, Nam-Chul; Lee, Sang Hun

    2015-01-01

    This paper presents a quantitative discomfort evaluation method based on biomechanical analysis results for human body movement, as well as its application to an assessment of the discomfort for truck ingress and egress. In this study, the motions of a human subject entering and exiting truck cabins with different types, numbers, and heights of footsteps were first measured using an optical motion capture system and load sensors. Next, the maximum voluntary contraction (MVC) ratios of the muscles were calculated through a biomechanical analysis of the musculoskeletal human model for the captured motion. Finally, the objective discomfort was evaluated using the proposed discomfort model based on the MVC ratios. To validate this new discomfort assessment method, human subject experiments were performed to investigate the subjective discomfort levels through a questionnaire for comparison with the objective discomfort levels. The validation results showed that the correlation between the objective and subjective discomforts was significant and could be described by a linear regression model. PMID:26067194

  13. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin was reduced from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to that of the surveying method and of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type which has not been and cannot be studied with previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
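
The absolute-accuracy metric used in this study is the RMSE between camera-measured and surveyed 3D marker coordinates. For reference, it is computed as below; the coordinates are invented for illustration.

```python
import math

def rmse_3d(measured, reference):
    """RMSE over paired 3D marker coordinates (same units, e.g. mm)."""
    sq = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(measured, reference):
        sq += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(sq / len(measured))

# hypothetical coordinates: each marker off by 1 mm along one axis
ref = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (0.0, 100.0, 0.0)]
cam = [(1.0, 0.0, 0.0), (100.0, 1.0, 0.0), (0.0, 100.0, 1.0)]
print(rmse_3d(cam, ref))  # → 1.0
```

A pure scaling error, as identified in the study, would show up as a residual that grows linearly with distance from the origin, which is why the error-distance correlation is reported alongside the RMSE.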

  14. An optical MEMS accelerometer fabricated using double-sided deep reactive ion etching on silicon-on-insulator wafer

    NASA Astrophysics Data System (ADS)

    Teo, Adrian J. T.; Li, Holden; Tan, Say Hwa; Yoon, Yong-Jin

    2017-06-01

    Optical MEMS devices provide fast detection, electromagnetic resilience and high sensitivity. Using this technology, an optical gratings based accelerometer design concept was developed for seismic motion detection purposes that provides miniaturization, high manufacturability, low costs and high sensitivity. Detailed in-house fabrication procedures of a double-sided deep reactive ion etching (DRIE) on a silicon-on-insulator (SOI) wafer for a micro opto electro mechanical system (MOEMS) device are presented and discussed. Experimental results obtained show that the conceptual device successfully captured motion similar to a commercial accelerometer with an average sensitivity of 13.6 mV G⁻¹, and a highest recorded sensitivity of 44.1 mV G⁻¹. A noise level of 13.5 mV was detected due to experimental setup limitations. This is the first MOEMS accelerometer developed using double-sided DRIE on SOI wafer for the application of seismic motion detection, and is a breakthrough technology platform to open up options for lower cost MOEMS devices.

  15. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    NASA Astrophysics Data System (ADS)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms based on a template matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students to master the basics of dance figures.

  16. Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture

    PubMed Central

    Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.

    2016-01-01

    Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857
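
The double-stance indeterminacy mentioned above is resolved with a smooth transition assumption: the total external force is split between the trailing and leading foot as weight transfers. The paper's exact transition function is not given in the abstract, so the sketch below uses a simple linear ramp as a stand-in; the force and timing values are hypothetical.

```python
def distribute_grf(total_fz, t, t_ds_start, t_ds_end):
    """Split total vertical GRF between trailing and leading foot during
    double stance, assuming a smooth (here: linear) load transfer."""
    # fraction of load already transferred to the leading foot
    s = (t - t_ds_start) / (t_ds_end - t_ds_start)
    s = min(max(s, 0.0), 1.0)          # clamp outside double stance
    leading = s * total_fz
    trailing = (1.0 - s) * total_fz
    return trailing, leading

# halfway through a 0.1 s double stance, an 800 N load is shared equally
print(distribute_grf(800.0, 0.05, 0.0, 0.1))  # → (400.0, 400.0)
```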

  17. Altitude-dependent Drift of a Chemical Release Cloud at Middle Latitudes

    NASA Astrophysics Data System (ADS)

    Pedersen, T.; Holmes, J. M.; Sutton, E. K.

    2017-12-01

    A chemical release experiment conducted at the White Sands Missile Range in February 2015 consisted of firing of three identical canisters at different altitudes along a near-vertical trajectory, creating a large structured cloud after diffusion and expansion of the three initial dispersals. Dedicated optical observations from near the launch site and a remote site allow determination of the position and motion of the extended optical cloud as a function of time, while photographs captured and posted by members of the general public provide additional look angles to constrain the cloud shape in more detail. We compare the observed drift and evolution of the cloud with empirical and theoretical models of the neutral winds to examine the altitudinal shear in the neutral winds and their effects on the motion and shape of the extended optical cloud.

  18. 3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor

    NASA Astrophysics Data System (ADS)

    Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki

    The aim of this study is to propose a method for the measurement of three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without requiring careful sensor installation. Although high-accuracy measurement of sports motion is currently achieved using optical motion capture systems, these have disadvantages such as the need for camera calibration and limitations on the measurement place. The proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity and posture of the upper limb agree with the actual ones. The experimental results for the measurement of pitching motion show that the trajectories of the shoulder, elbow and wrist estimated by the proposed method are highly correlated with those from the motion capture system, within an estimation error of about 10%.
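
The integration-error correction described above, forcing the estimated final state to agree with the actual one, can be illustrated by linearly redistributing the terminal error over the trajectory. This is a common endpoint-update simplification, not necessarily the authors' exact scheme; the trajectory values are hypothetical.

```python
def correct_drift(positions, true_final):
    """Linearly redistribute the terminal integration error over the
    trajectory so the estimated final position matches the known one."""
    n = len(positions)
    err = positions[-1] - true_final
    return [p - err * i / (n - 1) for i, p in enumerate(positions)]

# drifting integrated positions (m); the true final position is 0.5 m
traj = [0.0, 0.1, 0.25, 0.45, 0.7]
fixed = correct_drift(traj, 0.5)
print(round(fixed[-1], 3))  # → 0.5
```

The start point is left untouched and the correction grows linearly toward the end, mirroring how integration drift accumulates over time.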

  19. Development of method for quantifying essential tremor using a small optical device.

    PubMed

    Chen, Kai-Hsiang; Lin, Po-Chieh; Chen, Yu-Jung; Yang, Bing-Shiang; Lin, Chin-Hsien

    2016-06-15

    Clinical assessment scales are the most common means used by physicians to assess tremor severity. Some scientific tools that may be able to replace these scales to objectively assess the severity, such as accelerometers, digital tablets, electromyography (EMG) measurement devices, and motion capture cameras, are currently available. However, most of the operational modes of these tools are relatively complex or are only able to capture part of the clinical information; furthermore, using these tools is sometimes time consuming. Currently, there is no tool available for automatically quantifying tremor severity in clinical environments. We aimed to develop a rapid, objective, and quantitative system for measuring the severity of finger tremor using a small portable optical device (Leap Motion). A single test took 15 s to conduct, and three algorithms were proposed to quantify the severity of finger tremor. The system was tested with four patients diagnosed with essential tremor. The proposed algorithms were able to quantify different characteristics of tremor in clinical environments, and could be used as references for future clinical assessments. A portable, easy-to-use, small-sized, and noncontact device (Leap Motion) was used to clinically detect and record finger movement, and three algorithms were proposed to describe tremor amplitudes. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Pre-Capture Privacy for Small Vision Sensors.

    PubMed

    Pittaluga, Francesco; Koppal, Sanjeev Jagannatha

    2017-11-01

    The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.

  1. A low cost wearable optical-based goniometer for human joint monitoring

    NASA Astrophysics Data System (ADS)

    Lim, Chee Kian; Luo, Zhiqiang; Chen, I.-Ming; Yeo, Song Huat

    2011-03-01

    Widely used in the fields of physical and occupational therapy, goniometers are indispensable when it comes to angular measurement of the human joint. In both fields, there is a need to measure the range of motion associated with various joints and muscle groups. For example, a goniometer may be used to help determine the current status of the range of motion in bending the arm at the elbow, bending the knee, or bending at the waist. The device can help to establish the range of motion at the beginning of the treatment series, and also allow the therapist to monitor progress during subsequent sessions. Most commonly found are mechanical goniometers, which are inexpensive but bulky; because their parts are mechanically linked, accuracy and resolution are largely limited. Electronic and optical fiber-based goniometers promise better performance than their mechanical counterparts, but their higher cost and setup requirements make them less attractive propositions. In this paper, we present a reliable and non-intrusive design of an optical-based goniometer for human joint measurement. This device allows continuous and long-term monitoring of human joint motion in an everyday setting. The proposed device was benchmarked against a mechanical goniometer and an optical motion capture system to validate its performance. The empirical results show that this design can be used as a robust and effective wearable joint monitoring device.

  2. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

    the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external

  3. Do kinematic metrics of walking balance adapt to perturbed optical flow?

    PubMed

    Thompson, Jessica D; Franz, Jason R

    2017-08-01

    Visual (i.e., optical flow) perturbations can be used to study balance control and balance deficits. However, it remains unclear whether walking balance control adapts to such perturbations over time. Our purpose was to investigate the propensity for visuomotor adaptation in walking balance control using prolonged exposure to optical flow perturbations. Ten subjects (age: 25.4±3.8years) walked on a treadmill while watching a speed-matched virtual hallway with and without continuous mediolateral optical flow perturbations of three different amplitudes. Each of three perturbation trials consisted of 8min of prolonged exposure followed by 1min of unperturbed walking. Using 3D motion capture, we analyzed changes in foot placement kinematics and mediolateral sacrum motion. At their onset, perturbations elicited wider and shorter steps, alluding to a more cautious, general anticipatory balance control strategy. As perturbations continued, foot placement tended toward values seen during unperturbed walking while step width variability and mediolateral sacrum motion concurrently increased. Our findings suggest that subjects progressively shifted from a general anticipatory balance control strategy to a reactive, task-specific strategy using step-to-step adjustments. Prolonged exposure to optical flow perturbations may have clinical utility to reinforce reactive, task-specific balance control through training. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Method for measuring tri-axial lumbar motion angles using wearable sheet stretch sensors

    PubMed Central

    Nakamoto, Hiroyuki; Yamaji, Tokiya; Ootaka, Hideo; Bessho, Yusuke; Nakamura, Ryo; Ono, Rei

    2017-01-01

    Background: Body movements, such as trunk flexion and rotation, are risk factors for low back pain in occupational settings, especially in healthcare workers. Wearable motion capture systems are potentially useful for monitoring lower back movement in healthcare workers to help avoid these risk factors. In this study, we propose a novel system using sheet stretch sensors and investigate its validity for estimating lower back movement. Methods: Six volunteers (female:male = 1:1, mean age: 24.8 ± 4.0 years, height 166.7 ± 5.6 cm, weight 56.3 ± 7.6 kg) participated in test protocols that involved executing seven types of movements. The movements were three uniaxial trunk movements (i.e., trunk flexion-extension, trunk side-bending, and trunk rotation) and four multiaxial trunk movements (i.e., flexion + rotation, flexion + side-bending, side-bending + rotation, and moving around the cranial–caudal axis). Each trial lasted for approximately 30 s. Four stretch sensors were attached to each participant’s lower back. The lumbar motion angles were estimated using simple linear regression analysis based on the stretch sensor outputs and compared with those obtained by an optical motion capture system. Results: The estimated lumbar motion angles showed a good correlation with the actual angles, with correlation values of r = 0.68 (SD = 0.35), r = 0.60 (SD = 0.19), and r = 0.72 (SD = 0.18) for the flexion-extension, side-bending, and rotation movements, respectively (all P < 0.05). The estimation errors in all three directions were less than 3°. Conclusion: The stretch sensors mounted on the back provided reasonable estimates of the lumbar motion angles. The novel motion capture system provided three directional angles without capture space limits. The wearable system has great potential for monitoring lower back movement in healthcare workers and helping to prevent low back pain. PMID:29020053
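
Validity here is reported as the Pearson correlation between sensor-estimated and optically measured angles. For reference, the coefficient is computed as below; the angle pairs are invented for illustration, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between estimated and reference angles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

est = [10.0, 20.0, 28.0, 41.0, 52.0]   # sensor-estimated angles (deg)
ref = [12.0, 19.0, 30.0, 40.0, 50.0]   # mocap reference angles (deg)
print(round(pearson_r(est, ref), 3))  # → 0.996
```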

  5. Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.

    PubMed

    Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying

    2016-03-21

    Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.

  6. A compact fiber optics-based heterodyne combined normal and transverse displacement interferometer.

    PubMed

    Zuanetti, Bryan; Wang, Tianxue; Prakash, Vikas

    2017-03-01

    While Photonic Doppler Velocimetry (PDV) has become a common diagnostic tool for the measurement of normal component of particle motion in shock wave experiments, this technique has not yet been modified for the measurement of combined normal and transverse motion, as needed in oblique plate impact experiments. In this paper, we discuss the design and implementation of a compact fiber-optics-based heterodyne combined normal and transverse displacement interferometer. Like the standard PDV, this diagnostic tool is assembled using commercially available telecommunications hardware and uses a 1550 nm wavelength 2 W fiber-coupled laser, an optical focuser, and single mode fibers to transport light to and from the target. Two additional optical probes capture first-order beams diffracted from a reflective grating at the target free-surface and deliver the beams past circulators and a coupler where the signal is combined to form a beat frequency. The combined signal is then digitized and analyzed to determine the transverse component of the particle motion. The maximum normal velocity that can be measured by this system is limited by the equivalent transmission bandwidth (3.795 GHz) of the combined detector, amplifier, and digitizer and is estimated to be ∼2.9 km/s. Sample symmetric oblique plate-impact experiments are performed to demonstrate the capability of this diagnostic tool in the measurement of the combined normal and transverse displacement particle motion.
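
The quoted ~2.9 km/s ceiling follows directly from the standard PDV relation v = λ·f_beat / 2, evaluated at the 1550 nm laser wavelength and the 3.795 GHz equivalent transmission bandwidth:

```python
def pdv_velocity(beat_freq_hz, wavelength_m=1550e-9):
    """Normal velocity (m/s) from a PDV beat frequency: v = λ·f_beat / 2."""
    return wavelength_m * beat_freq_hz / 2.0

# the 3.795 GHz bandwidth caps the measurable normal velocity
print(round(pdv_velocity(3.795e9), 1))  # → 2941.1 m/s, i.e. ~2.9 km/s
```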

  7. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
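
    Once a displacement time-history has been extracted from video, the frequency content the authors mention can be recovered with a plain FFT peak pick. A minimal sketch on synthetic data (the 30 fps camera rate and 1.25 Hz sway are illustrative, not from the paper):

```python
import numpy as np

def dominant_frequency(displacement, fs):
    """Peak frequency (Hz) of a displacement time-history sampled at fs."""
    x = displacement - np.mean(displacement)        # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

# Synthetic 1.25 Hz structural sway sampled at a 30 fps camera rate.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
sway = 5.0 * np.sin(2 * np.pi * 1.25 * t) + 0.5 * rng.normal(size=t.size)
print(dominant_frequency(sway, fs))                 # prints 1.25
```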

  8. A novel teaching system for industrial robots.

    PubMed

    Lin, Hsien-I; Lin, Yu-Hsiang

    2014-03-27

    The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles.
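
    The pen-tip estimate described can be sketched as a rigid-body computation: three markers define an orthonormal frame, and the tip is a constant offset in that frame. An illustrative reconstruction (not the authors' exact algorithm; the marker layout and tip offset are assumed):

```python
import numpy as np

def marker_frame(p0, p1, p2):
    """Rotation matrix whose columns are an orthonormal frame built from
    three non-collinear marker positions."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    z = np.cross(x, p2 - p0)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def pen_tip(p0, p1, p2, tip_local):
    """Tip position from marker positions and the tip's constant offset
    expressed in the marker frame (found once by calibration)."""
    return p0 + marker_frame(p0, p1, p2) @ tip_local

p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
tip_local = np.array([0.0, 0.0, -0.15])   # tip 15 cm below the marker plane
tip = pen_tip(p0, p1, p2, tip_local)
```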

  9. A Novel Teaching System for Industrial Robots

    PubMed Central

    Lin, Hsien-I; Lin, Yu-Hsiang

    2014-01-01

    The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles. PMID:24681669

  10. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, James M.; Price, Gareth J., E-mail: gareth.price@christie.nhs.uk; Faculty of Medical and Human Sciences, Manchester Academic Health Sciences Centre, University of Manchester, Manchester

    2013-12-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies.

  11. Determining the maximum diameter for holes in the shoe without compromising shoe integrity when using a multi-segment foot model.

    PubMed

    Shultz, Rebecca; Jenkyn, Thomas

    2012-01-01

    Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be large enough to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to the first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral shoe and stability shoe for any size hole. This study demonstrates that, for holes up to this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.

  12. Optical head tracking for functional magnetic resonance imaging using structured light.

    PubMed

    Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D

    2008-07-01

    An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.

  13. Experimental Studies of the Brownian Diffusion of Boomerang Colloidal Particle in a Confined Geometry

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Ayan; Wang, Feng; Joshi, Bhuwan; Wei, Qi-Huo

    2011-03-01

    Recent studies show that boomerang-shaped molecules can form various kinds of liquid crystalline phases. One debated topic related to boomerang molecules is the existence of a biaxial nematic liquid crystalline phase. Developing optical microscopy studies of colloidal systems of boomerang particles would allow us to gain a better understanding of orientational ordering and dynamics at the ``single molecule'' level. Here we report the fabrication and experimental study of the Brownian motion of individual boomerang colloidal particles confined between two glass plates. We used dark-field optical microscopy to directly visualize the Brownian motion of single colloidal particles in a quasi-two-dimensional geometry. An EMCCD was used to capture the motion in real time. An in-house image-processing algorithm written in MATLAB was used to precisely track the position and orientation of the particles with sub-pixel accuracy. The experimental findings on the Brownian diffusion of a single boomerang colloidal particle will be discussed.
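
    As a sketch of the kind of analysis such tracking enables (not the authors' code; all parameters are illustrative), a diffusion coefficient can be estimated from the mean squared displacement of a tracked quasi-2D trajectory:

```python
import numpy as np

def msd(positions, lag):
    """Mean squared displacement at a given frame lag for an (N, 2) trajectory."""
    d = positions[lag:] - positions[:-lag]
    return float(np.mean(np.sum(d ** 2, axis=1)))

# Simulate a quasi-2D Brownian trajectory: per-axis step variance 2*D*dt.
rng = np.random.default_rng(1)
D_true, dt, n = 0.1, 0.05, 20000          # um^2/s, s, frames
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n, 2))
traj = np.cumsum(steps, axis=0)

# In 2D, MSD(tau) = 4*D*tau, so D can be read off one lag (or a linear fit).
lag = 10
D_est = msd(traj, lag) / (4 * lag * dt)
```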

  14. Some uses of wavelets for imaging dynamic processes in live cochlear structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, J.

    2007-09-01

    A variety of image and signal processing algorithms based on wavelet filtering tools have been developed during the last few decades, that are well adapted to the experimental variability typically encountered in live biological microscopy. A number of processing tools are reviewed, that use wavelets for adaptive image restoration and for motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is presented, and its effectiveness is demonstrated on artificial and experimental image sequences.

  15. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
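
    The tracking hardware here is custom, but the core software step, locating the pupil centroid in each camera frame, can be sketched with a simple threshold-and-centroid approach (illustrative only; practical pupil trackers typically add ellipse fitting and glint rejection):

```python
import numpy as np

def pupil_centroid(frame, threshold=50):
    """Centroid (row, col) of pixels darker than `threshold`; in an on-axis
    IR-illuminated eye image the pupil is the darkest region."""
    rows, cols = np.nonzero(frame < threshold)
    if rows.size == 0:
        return None                        # pupil lost this frame
    return float(rows.mean()), float(cols.mean())

# Synthetic frame: bright sclera/iris with a dark pupil disk at (40, 60).
frame = np.full((100, 120), 200, dtype=np.uint8)
yy, xx = np.mgrid[:100, :120]
frame[(yy - 40) ** 2 + (xx - 60) ** 2 < 15 ** 2] = 10
print(pupil_centroid(frame))               # prints (40.0, 60.0)
```

    The frame-to-frame centroid displacement would then drive the OCT scan offsets to keep the beam registered to the eye.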

  16. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in the movie theaters results in huge revenue loss to the motion picture industry every year, but still image and video capture may present even a bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion. Those are photosites in CCD sensors and semiconductor elements in CMOS sensors. They are known to reflect a small fraction of incident light, but could also act as a diffraction grating, resulting in the optical response that could be utilized to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye-safety of the people who could be exposed to optical beam radiation. This paper presents preliminary experimental data, as well as the proof-of-concept simulation results.

  17. Feasibility of Using Low-Cost Motion Capture for Automated Screening of Shoulder Motion Limitation after Breast Cancer Surgery.

    PubMed

    Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K

    2015-01-01

    To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Descriptive study of motion measured via 2 methods. Academic cancer center oncology clinic. 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Correlation of motion capture with goniometry and detection of motion limitation. Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
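
    Converting captured landmark positions to a joint angle (the paper's "body angle" and "projection angle" methods are not detailed in the abstract; this is a generic three-point angle between segments, with hypothetical landmark coordinates):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint` between two body segments defined by 3D
    landmark positions (e.g. hip-shoulder-elbow for shoulder elevation)."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Arm raised straight out to the side: trunk points down, upper arm points out.
hip, shoulder, elbow = [0, -0.5, 0], [0, 0, 0], [0.3, 0, 0]
print(round(joint_angle(hip, shoulder, elbow)))   # prints 90
```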

  18. Undergraduate Labs for Biological Physics: Brownian Motion and Optical Trapping

    NASA Astrophysics Data System (ADS)

    Chu, Kelvin; Laughney, A.; Williams, J.

    2006-12-01

    We describe a set of case-study driven labs for an upper-division biological physics course. These labs are motivated by case-studies and consist of inquiry-driven investigations of Brownian motion and optical-trapping experiments. Each lab incorporates two innovative educational techniques to drive the process and application aspects of scientific learning. Case studies are used to encourage students to think independently and apply the scientific method to a novel lab situation. Student input from this case study is then used to decide how to best do the measurement, guide the project and ultimately evaluate the success of the program. Where appropriate, visualization and simulation using VPython is used. Direct visualization of Brownian motion allows students to directly calculate Avogadro's number or the Boltzmann constant. Following case-study driven discussion, students use video microscopy to measure the motion of latex spheres in different viscosity fluids to arrive at a good approximation of NA or kB. Optical trapping (laser tweezer) experiments allow students to investigate the consequences of 100-pN forces on small particles. The case study consists of a discussion of the Boltzmann distribution and equipartition theorem followed by a consideration of the shape of the potential. Students can then use video capture to measure the distribution of bead positions to determine the shape and depth of the trap. This work was supported by NSF DUE-0536773.
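
    The trap-depth measurement rests on the equipartition theorem: the variance of the bead's tracked position fixes the trap stiffness. A sketch with simulated positions (all parameter values are illustrative):

```python
import numpy as np

kB = 1.380649e-23     # Boltzmann constant, J/K
T = 295.0             # room temperature, K

# Bead positions in the trap (m): in the lab these come from video capture;
# here they are drawn with the variance equipartition predicts for the trap.
k_true = 1e-6         # trap stiffness, N/m (= 1 pN/um)
rng = np.random.default_rng(7)
x = rng.normal(scale=np.sqrt(kB * T / k_true), size=100_000)

# Equipartition: (1/2) k <x^2> = (1/2) kB T, so k = kB T / <x^2>.
k_est = kB * T / np.mean(x ** 2)
```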

  19. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.

  20. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  1. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.

  2. Trajectory of coronary motion and its significance in robotic motion cancellation.

    PubMed

    Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor

    2004-05-01

    To characterize remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high speed black and white video camera (50 frames/s) coupled with a laser sensor (60 microm resolution) were used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured for stretches of 8 s each. Several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed us to perform minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 microm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave; while slow excursion phases (<50 microm/s in the lateral plane) were observed during the P wave and the ST segment. The trajectories of the points of interest during consecutive cardiac cycles as well as during cardiac cycles minutes apart remained comparable (the differences were negligible), provided the hemodynamics remained stable. Inotrope-induced changes in cardiac contractility influenced not only the maximum excursion, but also the shape of the trajectory. Normal positive pressure ventilation displacing the heart in the thoracic cage was evident by the displacement of the reference point of the trajectory. The movement of the coronary artery after stabilization still appears to be significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation for existing robotic systems. Velocity plots could also help improve gated cardiac imaging.

  3. Fixation not required: characterizing oculomotor attention capture for looming stimuli.

    PubMed

    Lewis, Joanna E; Neider, Mark B

    2015-10-01

    A stimulus moving toward us, such as a ball being thrown in our direction or a vehicle braking suddenly in front of ours, often represents a stimulus that requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search (Lin, Franconeri, & Enns, Psychological Science, 19(7), 686-693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and attention capture for looming motion is likely related to motion itself rather than the onset of motion.

  4. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with those in state-of-the-art motion capture data processing software such as Vicon Blade.
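
    The paper's learned filter bases are not reproduced here, but the robust-statistics flavor of the approach can be illustrated with a simple Hampel-style outlier filter on a single joint-angle channel (an illustrative stand-in, not the authors' method):

```python
import numpy as np

def hampel_denoise(signal, half_window=5, n_sigmas=3.0):
    """Replace samples that deviate from the local median by more than
    n_sigmas robust standard deviations with that median."""
    out = signal.copy()
    k = 1.4826                        # scales MAD to sigma for Gaussian noise
    for i in range(len(signal)):
        lo, hi = max(0, i - half_window), min(len(signal), i + half_window + 1)
        window = signal[lo:hi]
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if mad > 0 and abs(signal[i] - med) > n_sigmas * mad:
            out[i] = med
    return out

# A smooth joint-angle curve corrupted by two gross marker-swap spikes.
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
noisy = clean.copy()
noisy[50] += 5.0
noisy[120] -= 4.0
denoised = hampel_denoise(noisy)
```

    Because the filter compares each sample to a robust local statistic, the gross spikes are removed while the smooth motion is left essentially untouched.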

  5. Manipulation of Micro Scale Particles in Optical Traps Using Programmable Spatial Light Modulation

    NASA Technical Reports Server (NTRS)

    Seibel, Robin E.; Decker, Arthur J. (Technical Monitor)

    2003-01-01

    1064 nm light, from an Nd:YAG laser, was polarized and incident upon a programmable parallel aligned liquid crystal spatial light modulator (PAL-SLM), where it was phase modulated according to the program controlling the PAL-SLM. Light reflected from the PAL-SLM was injected into a microscope and focused. At the focus, multiple optical traps were formed in which 9.975 μm spheres were captured. The traps and the spheres were moved by changing the program of the PAL-SLM. The motion of ordered groups of microparticles was clearly demonstrated.

  6. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  7. Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  8. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    NASA Astrophysics Data System (ADS)

    Antonya, C.

    2017-12-01

    Optical tracking of users and various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data contain mainly point-clouds, coordinates of markers or coordinates of points of interest. These data can be used for retrieving information related to the geometry of the objects, but also to extract parameters for the analytical model of the system useful in a variety of computer aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time-series of the marker's position. The least square method was used for fitting the data into different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
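
    The least-squares fitting step can be sketched in the planar case with the standard algebraic (Kasa) circle fit, which may differ from the authors' exact formulation; the joint center falls out as the circle center (the joint location and link length below are invented for illustration):

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit for an (N, 2) point set.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + d, with d = r^2 - cx^2 - cy^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (cx, cy, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (float(cx), float(cy)), float(np.sqrt(d + cx ** 2 + cy ** 2))

# Marker on a link rotating about a revolute joint at (1.5, -0.4), radius 0.25.
theta = np.linspace(0.2, 2.0, 50)          # only a partial arc is observed
pts = np.column_stack([1.5 + 0.25 * np.cos(theta),
                       -0.4 + 0.25 * np.sin(theta)])
center, radius = fit_circle(pts)
```

    In 3D, a plane fit to the same marker trajectory gives the revolute axis direction, and the circle fit in that plane gives the axis location.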

  9. Opto-mechanical design and development of a 460mm diffractive transmissive telescope

    NASA Astrophysics Data System (ADS)

    Qi, Bo; Wang, Lihua; Cui, Zhangang; Bian, Jiang; Xiang, Sihua; Ma, Haotong; Fan, Bin

    2018-01-01

    Using lightweight, replicated diffractive optics, we can construct extremely large aperture telescopes in space. The transmissive primary significantly reduces the sensitivity to out-of-plane motion as compared to reflective systems while reducing the manufacturing time and costs. This paper focuses on the design, fabrication and ground demonstration of a 460 mm diffractive transmissive telescope; the primary F/# is 6, the optical field of view is 0.2°, and the imaging bandwidth is 486 nm to 656 nm. The design method of the diffractive optical system was verified, and the ability to capture a high-quality image using diffractive telescope collection optics was tested. The results show that the limiting resolution is 94 lp/mm and that the diffractive system has good imaging performance over broad bandwidths. This technology is particularly promising as a means to achieve extremely large optical primaries from compact, lightweight packages.

  10. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.

  11. Motion onset does not capture attention when subsequent motion is "smooth".

    PubMed

    Sunny, Meera Mary; von Mühlenen, Adrian

    2011-12-01

    Previous research on the attentional effects of moving objects has shown that motion per se does not capture attention. However, in later studies it was argued that the onset of motion does capture attention. Here, we show that this motion-onset effect critically depends on motion jerkiness--that is, the rate at which the moving stimulus is refreshed. Experiment 1 used search displays with a static, a motion-onset, and an abrupt-onset stimulus, while systematically varying the refresh rate of the moving stimulus. The results showed that motion onset only captures attention when subsequent motion is jerky (8 and 17 Hz), not when it is smooth (33 and 100 Hz). Experiment 2 replaced motion onset with continuous motion, showing that motion jerkiness does not affect how continuous motion is processed. These findings do not support accounts that assume a special role for motion onset, but they are in line with the more general unique-event account.

  12. Generating action descriptions from statistically integrated representations of human motions and sentences.

    PubMed

    Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko

    2016-08-01

    It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full-body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to natural language so that robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating the observation of human motion with natural language. The framework consists of two models: the first statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integrating these two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in an appropriate order using the second, yielding the sentences most likely to be generated from the motion. The proposed framework was tested on human full-body motion measured by an optical motion capture system; descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated. Copyright © 2016 Elsevier Ltd. All rights reserved.
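The two-model pipeline described above (a motion-to-word relevance model plus an n-gram language model) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the vocabulary, relevance scores, and bigram log-probabilities below are invented for demonstration.

```python
import itertools

def generate_description(word_scores, bigram_logp, k=3):
    """Pick the k words most relevant to the observed motion, then order
    them by maximizing summed bigram log-probability (with a <s> start)."""
    words = sorted(word_scores, key=word_scores.get, reverse=True)[:k]

    def seq_logp(seq):
        total = bigram_logp.get(("<s>", seq[0]), -10.0)  # unseen-bigram penalty
        for a, b in zip(seq, seq[1:]):
            total += bigram_logp.get((a, b), -10.0)
        return total

    # Brute-force search over orderings is fine for small k
    return max(itertools.permutations(words), key=seq_logp)

# Hypothetical relevance scores (model 1) and bigram model (model 2)
word_scores = {"person": 0.9, "walks": 0.8, "forward": 0.7, "red": 0.1}
bigram_logp = {("<s>", "person"): -0.1, ("person", "walks"): -0.2,
               ("walks", "forward"): -0.3}
print(" ".join(generate_description(word_scores, bigram_logp)))  # person walks forward
```

In the actual framework the relevance scores and n-gram statistics are learned from motion-capture data and annotated sentences rather than hand-specified.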

  13. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. To obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by factors such as camera motion and optical blurring, facial expressions, and gender. Motion blurring is usually present in face images because of movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to its effects. Experimental results show that our method is more effective at enhancing age estimation performance than systems that do not employ it. PMID:26334282

  14. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient, and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, while simultaneously calculating the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
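The Gauss-Newton minimization mentioned above can be illustrated on a toy one-parameter least-squares problem. This sketch only shows the iteration scheme (linearize the residuals, solve the normal equation), not the paper's surface-tracking objective; the exponential model and data are invented.

```python
import math

def gauss_newton(residual, jacobian, x0, iters=20):
    """One-parameter Gauss-Newton: at each step, linearize the residuals
    r(x) and solve the scalar normal equation (J^T J) dx = -J^T r."""
    x = x0
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        x -= Jtr / JtJ
    return x

# Fit y = exp(a*x) to noise-free synthetic data with true a = 0.7
xs = [0.0, 0.5, 1.0, 1.5]
ys = [math.exp(0.7 * x) for x in xs]
res = lambda a: [math.exp(a * x) - y for x, y in zip(xs, ys)]
jac = lambda a: [x * math.exp(a * x) for x in xs]
print(round(gauss_newton(res, jac, 0.0), 6))  # → 0.7
```

In FlyCap the unknown is a high-dimensional deformation field rather than a scalar, so the normal equation becomes a sparse linear system, but the linearize-and-solve loop is the same idea.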

  15. Involuntary eye motion correction in retinal optical coherence tomography: Hardware or software solution?

    PubMed

    Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M

    2017-04-01

    In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data to create an anatomically correct representation of the retina, either offline or online. In software-based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image or by considering some prior assumptions about the nature of the eye motion. Careful investigation of the most common methods in the field provides invaluable insight regarding future directions of research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques, on the other hand, achieve moderate success, and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts they contain. However, they remain relevant to the field since they are the sole class of techniques that can be applied to legacy data acquired using systems that lack extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing

    NASA Astrophysics Data System (ADS)

    Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel

    The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, captured motions normally serve specific needs. In an effort to adapt and reuse captured human motions in new tasks and environments, and to ease the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage of faster deformations and more natural-looking motion results compared with the goal-directed constraint-based methods found in the literature.

  17. Optical holography applications for the zero-g Atmospheric Cloud Physics Laboratory

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1974-01-01

    A complete description of holography is provided, both for the time-dependent case of moving-scene holography and for the time-independent case of stationary holography. Further, a specific holographic arrangement is proposed for detecting the particle size distribution in an atmospheric-simulation cloud chamber. Particle growth rate is investigated in this chamber; therefore, the proposed holographic system must capture continuous particle motion in real time. Such a system is described.

  18. Active contour-based visual tracking by integrating colors, shapes, and motions.

    PubMed

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.

  19. Samba: a real-time motion capture system using wireless camera sensor networks.

    PubMed

    Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai

    2014-03-20

    There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.

  1. Assessment of planarity of the golf swing based on the functional swing plane of the clubhead and motion planes of the body points.

    PubMed

    Kwon, Young-Hoo; Como, Christopher S; Singhal, Kunal; Lee, Sangwoo; Han, Ki Hoon

    2012-06-01

    The purposes of this study were (1) to determine the functional swing plane (FSP) of the clubhead and the motion planes (MPs) of the shoulder/arm points and (2) to assess planarity of the golf swing based on the FSP and the MPs. The swing motions of 14 male skilled golfers (mean handicap = -0.5 +/- 2.0) using three different clubs (driver, 5-iron, and pitching wedge) were captured by an optical motion capture system (250 Hz). The FSP and MPs along with their slope/relative inclination and direction/direction of inclination were obtained using a new trajectory-plane fitting method. The slope and direction of the FSP revealed a significant club effect (p < 0.001). The relative inclination and direction of inclination of the MP showed significant point (p < 0.001) and club (p < 0.001) effects and interaction (p < 0.001). Maximum deviations of the points from the FSP revealed a significant point effect (p < 0.001) and point-club interaction (p < 0.001). It was concluded that skilled golfers exhibited well-defined and consistent FSP and MPs, and the shoulder/arm points moved on vastly different MPs and exhibited large deviations from the FSP. Skilled golfers in general exhibited semi-planar downswings with two distinct phases: a transition phase and a planar execution phase.
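The abstract does not specify its trajectory-plane fitting method, but a common least-squares approach fits a plane to 3-D trajectory points by taking the normal as the direction of smallest variance of the centered point cloud. A sketch under that assumption, with a synthetic path:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3-D trajectory: the unit normal is
    the right singular vector of the centered points with the smallest
    singular value; the centroid is a point on the plane."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    return centroid, vt[-1]

# Synthetic "clubhead" path lying exactly in the plane z = x
pts = [(t, np.sin(t), t) for t in np.linspace(0.0, 3.0, 50)]
_, normal = fit_plane(pts)
print(np.round(np.abs(normal), 3))  # → [0.707 0.    0.707]
```

Deviations of individual points from the fitted plane (dot products with the normal after centering) then quantify how planar the motion is, in the spirit of the FSP deviations reported above.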

  2. Motion Analysis System for Instruction of Nihon Buyo using Motion Capture

    NASA Astrophysics Data System (ADS)

    Shinoda, Yukitaka; Murakami, Shingo; Watanabe, Yuta; Mito, Yuki; Watanuma, Reishi; Marumo, Mieko

    The passing on and preserving of advanced technical skills has become an important issue in a variety of fields, and motion analysis using motion capture has recently become popular in the research of advanced physical skills. This research aims to construct a system having a high on-site instructional effect on dancers learning Nihon Buyo, a traditional dance in Japan, and to classify Nihon Buyo dancing according to style, school, and dancer's proficiency by motion analysis. We have been able to study motion analysis systems for teaching Nihon Buyo now that body-motion data can be digitized and stored by motion capture systems using high-performance computers. Thus, with the aim of developing a user-friendly instruction-support system, we have constructed a motion analysis system that displays a dancer's time series of body motions and center of gravity for instructional purposes. In this paper, we outline this instructional motion analysis system based on three-dimensional position data obtained by motion capture. We also describe motion analysis that we performed based on center-of-gravity data obtained by this system and motion analysis focusing on school and age group using this system.

  3. Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra

    2012-09-01

    Recent advances in optical coherence tomography (OCT), and the development of image reconstruction algorithms, enabled four-dimensional (4-D) (three-dimensional imaging over time) imaging of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements. However, this is time consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi-automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day three chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.

  4. AMUC: Associated Motion capture User Categories.

    PubMed

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  5. Fast left ventricle tracking in CMR images using localized anatomical affine optical flow

    NASA Astrophysics Data System (ADS)

    Queirós, Sandro; Vilaça, João. L.; Morais, Pedro; Fonseca, Jaime C.; D'hooge, Jan; Barbosa, Daniel

    2015-03-01

    In daily cardiology practice, assessment of left ventricular (LV) global function using non-invasive imaging remains central for the diagnosis and follow-up of patients with cardiovascular diseases. Despite the different methodologies currently accessible for LV segmentation in cardiac magnetic resonance (CMR) images, a fast and complete LV delineation is still limitedly available for routine use. In this study, a localized anatomically constrained affine optical flow method is proposed for fast and automatic LV tracking throughout the full cardiac cycle in short-axis CMR images. Starting from an automatically delineated LV in the end-diastolic frame, the endocardial and epicardial boundaries are propagated by estimating the motion between adjacent cardiac phases using optical flow. In order to reduce the computational burden, the motion is only estimated in an anatomical region of interest around the tracked boundaries and subsequently integrated into a local affine motion model. Such localized estimation enables to capture complex motion patterns, while still being spatially consistent. The method was validated on 45 CMR datasets taken from the 2009 MICCAI LV segmentation challenge. The proposed approach proved to be robust and efficient, with an average distance error of 2.1 mm and a correlation with reference ejection fraction of 0.98 (1.9 +/- 4.5%). Moreover, it showed to be fast, taking 5 seconds for the tracking of a full 4D dataset (30 ms per image). Overall, a novel fast, robust and accurate LV tracking methodology was proposed, enabling accurate assessment of relevant global function cardiac indices, such as volumes and ejection fraction.
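The affine motion model at the core of the approach can be illustrated by fitting a 2-D affine transform to point correspondences in a least-squares sense. The data below are synthetic, and the sketch omits the optical-flow estimation and anatomical constraints of the actual method.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine motion model dst ≈ A @ src + t,
    returned as the 2x3 matrix [A | t]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])      # rows: [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3x2 solution
    return params.T

# Synthetic correspondences generated by a known affine motion
A_true = np.array([[1.1, 0.0], [0.0, 0.9]])
t_true = np.array([0.5, -0.2])
src = np.array([(0, 0), (1, 0), (0, 1), (1, 1)], dtype=float)
dst = src @ A_true.T + t_true
print(np.round(fit_affine(src, dst), 2))
```

Integrating many local flow vectors into one affine model like this is what keeps the propagated contour spatially consistent while still allowing scale and shear changes between cardiac phases.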

  6. Absolute position calculation for a desktop mobile rehabilitation robot based on three optical mouse sensors.

    PubMed

    Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry

    2011-01-01

    ArmAssist is a wireless robot for post-stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors, which enables a cheap yet reliable position sensor to be built. Two of the sensors are the data source for odometry calculations, and the third takes very low-resolution pictures of a custom-designed mat. These pictures are processed by an optical symbol recognition (OSR) algorithm, which estimates the orientation of the robot and recognizes the landmarks placed on the mat. A data fusion strategy is described that detects misclassifications of the landmarks so that only reliable information is fused. The orientation given by the OSR algorithm significantly improves the odometry, and the recognition of the landmarks is used to reference the odometry to an absolute coordinate system. The system was tested using a 3D motion capture system. With the current mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm, with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
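The abstract does not detail the two-sensor odometry calculation; under a small-angle planar rigid-body assumption, with both sensors mounted a known baseline apart along the robot's x-axis, a minimal sketch looks like this (the geometry and readings are illustrative assumptions):

```python
def odometry_step(d1, d2, baseline):
    """Planar rigid-body odometry from two optical mouse sensors mounted
    a known baseline apart along the robot's x-axis. Small-angle model:
    the differential reading perpendicular to the baseline gives the
    rotation increment; the mean reading gives the translation."""
    dtheta = (d2[1] - d1[1]) / baseline       # rotation increment [rad]
    dx = (d1[0] + d2[0]) / 2.0                # translation increments
    dy = (d1[1] + d2[1]) / 2.0
    return dx, dy, dtheta

# Pure translation: both sensors report identical displacements
print(odometry_step((1.0, 0.5), (1.0, 0.5), 100.0))   # → (1.0, 0.5, 0.0)
# Pure rotation about the midpoint: equal and opposite y-readings
print(odometry_step((0.0, -0.5), (0.0, 0.5), 100.0))  # → (0.0, 0.0, 0.01)
```

Integrating such increments accumulates drift, which is why the system references the odometry to the absolute mat coordinates whenever a landmark is reliably recognized.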

  7. Accuracy of an optical active-marker system to track the relative motion of rigid bodies.

    PubMed

    Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A

    2007-01-01

    The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary and found to be 0.04 degrees and 0.03 mm. Incremental 10 degrees rotations and 10-mm translations were made using a more precise tool than the Optotrak. Increasing camera distance decreased the precision or increased the range of values observed for a set motion and increased the error in rotation or bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics and, therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10 degrees rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance the cameras are focused to during calibration.
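The bias and 95% repeatability limit reported above can be computed from repeated measurements of a known motion. The sketch below assumes a normal-error model (limit = 1.96 × the sample standard deviation of the errors); the readings are hypothetical, not the study's data.

```python
import statistics

def bias_and_repeatability(measured, true_value):
    """Bias = mean error; 95% repeatability limit taken as 1.96 times the
    sample standard deviation of the errors (normal-error assumption)."""
    errors = [m - true_value for m in measured]
    return statistics.mean(errors), 1.96 * statistics.stdev(errors)

# Hypothetical repeated readings of a precise 10-degree rotation
readings = [10.02, 9.98, 10.05, 10.01, 9.99]
bias, limit = bias_and_repeatability(readings, 10.0)
print(round(bias, 3), round(limit, 3))  # → 0.01 0.054
```

Separating bias from repeatability in this way matches the abstract's observation that camera distance can worsen the two quantities independently.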

  8. Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform

    PubMed Central

    Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.

    2016-01-01

    Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015
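The time-lag principle described above can be caricatured in a few lines of ordinary code. This toy ignores spikes, neurons, and the analog hardware entirely; the detector spacing is an assumed parameter, and units are arbitrary.

```python
def detect_motion(t_left, t_right, spacing=50.0):
    """Direction and apparent speed inferred from the activation time lag
    between two neighboring detectors a known spacing apart."""
    lag = t_right - t_left
    if lag == 0:
        return "ambiguous", float("inf")
    direction = "left-to-right" if lag > 0 else "right-to-left"
    return direction, spacing / abs(lag)

print(detect_motion(1.0, 1.5))  # → ('left-to-right', 100.0)
```

The neuromorphic system performs the same inference with precisely timed spikes from contrast-sensitive photoreceptors rather than explicit timestamps.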

  10. Assessment of congruence and impingement of the hip joint in professional ballet dancers: a motion capture study.

    PubMed

    Charbonnier, Caecilia; Kolo, Frank C; Duthon, Victoria B; Magnenat-Thalmann, Nadia; Becker, Christoph D; Hoffmeyer, Pierre; Menetrey, Jacques

    2011-03-01

    Early hip osteoarthritis in dancers could be explained by femoroacetabular impingement. However, there is a lack of validated noninvasive methods and dynamic studies to ascertain impingement during motion. Moreover, it is unknown whether the femoral head and acetabulum remain congruent in typical dancing positions. The practice of some dancing movements could cause a loss of hip joint congruence and recurrent impingements, which could lead to early osteoarthritis. Descriptive laboratory study. Eleven pairs of female dancers' hips were motion captured with an optical tracking system while performing 6 different dancing movements. The resulting computed motions were applied to patient-specific hip joint 3-dimensional models based on magnetic resonance images. While visualizing the dancer's hip in motion, the authors detected impingements using computer-assisted techniques. The range of motion and congruence of the hip joint were also quantified in those 6 recorded dancing movements. The frequency of impingement and subluxation varied with the type of movement. Four dancing movements (développé à la seconde, grand écart facial, grand écart latéral, and grand plié) appear to induce significant stress in the hip joint, according to the observed high frequency of impingement and amount of subluxation. The femoroacetabular translations were high (range, 0.93 to 6.35 mm). For almost all movements, the computed zones of impingement were mainly located in the superior or posterosuperior quadrant of the acetabulum, consistent with radiologically diagnosed damage zones in the labrum. All dancers' hips were morphologically normal. Impingements and subluxations are frequently observed in typical ballet movements, causing cartilage hypercompression. These movements should be limited in frequency. The present study indicates that some dancing movements could damage the hip joint, which could lead to early osteoarthritis.

  11. Magneto-optical nanoparticles for cyclic magnetomotive photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Arnal, Bastien; Yoon, Soon Joon; Li, Junwei; Gao, Xiaohu; O'Donnell, Matthew

    2018-05-01

    Photoacoustic imaging is a highly promising tool to visualize molecular events with deep tissue penetration. Like most other modalities, however, image contrast under in vivo conditions is far from optimal due to background signals from tissue. Using iron oxide-gold core-shell nanoparticles, we previously demonstrated that magnetomotive photoacoustic (mmPA) imaging can dramatically reduce the influence of background signals and produce high-contrast molecular images. Here we report two significant advances toward clinical translation of this technology. First, we introduce a new class of compact, uniform, magneto-optically coupled core-shell nanoparticle, prepared through localized copolymerization of polypyrrole (PPy) on an iron oxide nanoparticle surface. The resulting iron oxide-PPy nanoparticles solve the photo-instability and small-scale synthesis problems previously encountered by the gold coating approach, and extend the large optical absorption coefficient of the particles beyond 1000 nm in wavelength. In parallel, we have developed a new generation of mmPA imaging featuring cyclic magnetic motion and ultrasound speckle tracking, with an image capture frame rate several hundred times faster than the photoacoustic speckle tracking method demonstrated previously. These advances enable robust artifact elimination caused by physiologic motion and first application of the mmPA technology in vivo for sensitive tumor imaging.

  12. Observation of motion of colloidal particles undergoing flowing Brownian motion using self-mixing laser velocimetry with a thin-slice solid-state laser.

    PubMed

    Sudo, S; Ohtomo, T; Otsuka, K

    2015-08-01

    We achieved a highly sensitive method for observing the motion of colloidal particles in a flowing suspension using a self-mixing laser Doppler velocimeter (LDV) comprising a laser-diode-pumped thin-slice solid-state laser and a simple photodiode. We describe the measurement method and the optical system of the self-mixing LDV for real-time measurements of the motion of colloidal particles. For a condensed solution, when the light scattered from the particles is reinjected into the solid-state laser, the laser output is modulated in intensity by the reinjected laser light. Thus, we can capture the motion of colloidal particles from the spectrum of the modulated laser output. For a diluted solution, when the relaxation oscillation frequency coincides with the Doppler shift frequency, fd, which is related to the average velocity of the particles, the spectrum reflecting the motion of the colloidal particles is enhanced by the resonant excitation of relaxation oscillations. Then, the spectral peak reflecting the motion of colloidal particles appears at 2×fd. The spectrum reflecting the motion of colloidal particles in a flowing diluted solution can be measured with high sensitivity, owing to the enhancement of the spectrum by the thin-slice solid-state laser.
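The Doppler shift frequency f_d relates to the average particle velocity through the standard LDV relation f_d = 2·v·cos(θ)/λ (an assumption for illustration; the abstract does not state the exact optical geometry). A quick sketch with assumed values, noting that the dilute-case spectral peak appears at 2×f_d:

```python
import math

def doppler_shift(velocity, wavelength, angle=0.0):
    """Standard LDV relation: f_d = 2 * v * cos(angle) / wavelength."""
    return 2.0 * velocity * math.cos(angle) / wavelength

fd = doppler_shift(1e-3, 1064e-9)  # 1 mm/s flow, 1064 nm laser (assumed values)
print(round(fd), round(2 * fd))    # dilute-solution spectral peak at 2*fd
```

Matching the relaxation oscillation frequency of the thin-slice laser to f_d is what resonantly enhances this otherwise weak spectral feature.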

  13. Mobile Motion Capture--MiMiC.

    PubMed

    Harbert, Simeon D; Jaiswal, Tushar; Harley, Linda R; Vaughn, Tyler W; Baranak, Andrew S

    2013-01-01

    We present the low-cost, simple, robust, mobile, and easy-to-use Mobile Motion Capture (MiMiC) system and discuss the constraints that guided its design. The MiMiC Android application captures motion data over Bluetooth from kinematic modules such as Shimmer 2r sensors. MiMiC is cost-effective and can be worn throughout a person's daily routine without being intrusive. It is a flexible motion capture system suited to many applications, including fall detection, detection of fatigue in industrial workers, and analysis of individuals' work patterns in various environments.

  14. Simultaneous NIRS and kinematics study of planning and execution of motor skill task: towards cerebral palsy rehabilitation

    NASA Astrophysics Data System (ADS)

    Chaudhary, Ujwal; Thompson, Bryant; Gonzalez, Jean; Jung, Young-Jin; Davis, Jennifer; Gonzalez, Patricia; Rice, Kyle; Bloyer, Martha; Elbaum, Leonard; Godavarty, Anuradha

    2013-03-01

    Cerebral palsy (CP) is a term that describes a group of motor impairment syndromes secondary to genetic and/or acquired disorders of the developing brain. In the current study, NIRS and motion capture were used simultaneously to correlate the brain's planning and execution activity with arm movement in healthy individuals. The prefrontal region of the brain was non-invasively imaged using a custom-built continuous-wave near-infrared spectroscopy (NIRS) system, and the kinematics of the arm movement were recorded using an infrared-based motion capture system (Qualisys). During the study, the subjects (over 18 years) performed 30 sec of arm movement followed by 30 sec of rest, repeated five times, with both their dominant and non-dominant arms. The optical signal acquired from the NIRS system was processed to elucidate activation and lateralization in the prefrontal region of the participants. The preliminary results show a difference in optical response between task and rest in healthy adults. Simultaneous NIRS imaging and kinematics data are currently being acquired from healthy individuals and individuals with CP in order to correlate brain activity with arm movement in real time. The study has significant implications for elucidating how the functional activity of the brain evolves with the physical movement of the arm, and thus has potential to inform the design of training and rehabilitation regimes for individuals with CP via kinematic monitoring and imaging of brain activity.
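    The 30 s task / 30 s rest block design lends itself to simple block averaging of the optical signal. The 10 Hz sampling rate and function names below are illustrative assumptions, not details from the study.

```python
import numpy as np

FS_HZ = 10          # assumed NIRS sampling rate
BLOCK = 30 * FS_HZ  # samples per 30 s block

def block_contrast(signal, n_cycles=5):
    """Mean task-minus-rest difference across task/rest cycles,
    a crude estimate of task-related activation."""
    signal = np.asarray(signal, float)
    task = [signal[2 * i * BLOCK:(2 * i + 1) * BLOCK].mean()
            for i in range(n_cycles)]
    rest = [signal[(2 * i + 1) * BLOCK:(2 * i + 2) * BLOCK].mean()
            for i in range(n_cycles)]
    return float(np.mean(task) - np.mean(rest))
```

In practice the signal would first be filtered and converted to hemoglobin concentration changes; this sketch shows only the block structure.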

  15. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhou, S; Williams, C; Ionascu, D

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The derived motion models were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures their angular difference in space; and 3) the Euclidean Model Norm (EMN), calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors of the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived with a different DIR algorithm. Compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the chosen algorithm for a particular application.
    This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
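    The model-derivation and comparison steps above can be sketched with synthetic data: PCA on the stacked DVFs via an SVD, and EMN as the quadrature sum of dot products with the reference model's first three eigenvectors, following the abstract's definition.

```python
import numpy as np

def motion_model(dvfs, n_modes=3):
    """PCA of displacement vector fields. Rows of `dvfs` are flattened
    DVFs; returns the top principal directions (eigenvectors) as rows."""
    centered = dvfs - dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_modes]

def emn(eigvec, ref_model):
    """Euclidean Model Norm: quadrature sum of the dot products of
    `eigvec` with the first three eigenvectors of the reference model."""
    return float(np.sqrt(sum(np.dot(eigvec, e) ** 2 for e in ref_model[:3])))
```

An eigenvector that lies entirely in the span of the reference model's first three modes gives EMN = 1; one orthogonal to them gives EMN = 0.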

  16. A Variational Approach to Video Registration with Subspace Constraints.

    PubMed

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector-valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images, which yields significant improvements in the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground-truth optical flow for the evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
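    The subspace constraint above amounts to keeping point trajectories close to a low-rank trajectory basis. A hard version of that projection is a truncated SVD; the paper uses a soft penalty inside a variational energy, so this sketch captures only the geometric core of the idea.

```python
import numpy as np

def project_low_rank(trajectories, rank):
    """Closest (Frobenius-norm) rank-`rank` approximation of a (2F, P)
    trajectory matrix: F frames, P tracked points."""
    u, s, vt = np.linalg.svd(trajectories, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]
```

Trajectories that already lie in a rank-r subspace pass through unchanged, while trajectories with components outside it are pulled back toward the manifold, which is what the soft penalty encourages.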

  17. Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.

    PubMed

    Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen

    2017-11-23

    Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each measuring one component of motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With this decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
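    One way to realize the decoupling described above is a linear calibration: sensor strains s relate to joint-motion components q through a coupling matrix C (s = Cq), fitted by least squares from calibration poses and inverted with the pseudoinverse. The linearity assumption and the 5-sensor / 3-component sizes below are illustrative, not the paper's model.

```python
import numpy as np

def fit_coupling(Q, S):
    """Least-squares coupling matrix C such that S ≈ Q @ C.T.
    Rows of Q are joint poses; rows of S are sensor readings."""
    X, *_ = np.linalg.lstsq(Q, S, rcond=None)
    return X.T

def decouple(C, s):
    """Recover joint-motion components from coupled sensor readings."""
    return np.linalg.pinv(C) @ s
```

With more sensors than motion components (five versus the wrist's degrees of freedom), the pseudoinverse also averages down sensor noise.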

  18. Effective motion planning strategy for space robot capturing targets under consideration of the berth position

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Jinguo

    2018-07-01

    Although many motion planning strategies for missions involving space robots capturing floating targets can be found in the literature, relatively little has been written on how to select the berth position where the spacecraft base hovers. In fact, the berth position is a flexible and controllable factor, and selecting a suitable berth position has a great impact on the efficiency of motion planning in the capture mission. Therefore, to make full use of the manoeuvrability of the space robot, this paper proposes a new viewpoint that treats the base berth position as an optimizable parameter in a more comprehensive and effective motion planning strategy. Considering the dynamic coupling, the dynamic singularities, and the physical limitations of space robots, a unified motion planning framework based on forward kinematics and parameter optimization is developed to convert the planning problem into a parameter optimization problem. To relax the strict grasping-position constraints of the capture mission, the new concept of a grasping area is proposed, which greatly simplifies the motion planning. Furthermore, a concise objective function is constructed using the penalty function method, and Particle Swarm Optimization (PSO) is employed as the solver to determine the free parameters. Two capture cases, i.e., capturing a two-dimensional (2D) planar target and capturing a three-dimensional (3D) spatial target, are studied under this framework. The corresponding simulation results demonstrate that the proposed method is efficient and effective for planning capture missions.
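    A minimal particle swarm optimizer of the kind used above to search the free parameters (e.g. the berth position) of a penalty-augmented objective can be sketched as follows. The inertia and acceleration coefficients are common textbook values, not the paper's settings, and the objective here is a placeholder.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bound=5.0, seed=0):
    """Minimize `objective` over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In the planning setting, `objective` would evaluate a candidate berth position and trajectory parameters, with constraint violations added as penalty terms.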

  19. Evaluation method for acoustic trapping performance by tracking motion of trapped microparticle

    NASA Astrophysics Data System (ADS)

    Lim, Hae Gyun; Ham Kim, Hyung; Yoon, Changhan

    2018-05-01

    We report a method to evaluate the performances of a single-beam acoustic tweezer using a high-frequency ultrasound transducer. The motion of a microparticle trapped by a 45-MHz single-element transducer was captured and analyzed to deduce the magnitude of trapping force. In the proposed method, the motion of a trapped microparticle was analyzed from a series of microscopy images to compute trapping force; thus, no additional equipment such as microfluidics is required. The method could be used to estimate the effective trapping force in an acoustic tweezer experiment to assess cell membrane deformability by attaching a microbead to the surface of a cell and tracking the motion of the trapped bead, which is similar to a bead-based assay that uses optical tweezers. The results showed that the trapping force increased with increasing acoustic intensity and duty factor, but the force eventually reached a plateau at a higher acoustic intensity. They demonstrated that this method could be used as a simple tool to evaluate the performance and to optimize the operating conditions of acoustic tweezers.
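    The effective trapping force deduced from tracked bead motion can be sketched via Stokes drag, F = 6πηrv, valid at low Reynolds number. The water viscosity and bead size below are illustrative values, not the study's.

```python
import math

def stokes_force(radius_m, velocity_mps, viscosity_pa_s=1.0e-3):
    """Drag force (N) on a sphere moving through fluid at velocity v;
    at steady state this balances the trapping force."""
    return 6.0 * math.pi * viscosity_pa_s * radius_m * velocity_mps
```

For a 5 µm-radius bead moving at 100 µm/s in water this gives roughly 9.4 pN, the same order of magnitude as forces probed in optical-tweezer bead assays.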

  20. Projectile Motion on an Inclined Misty Surface: I. Capturing and Analysing the Trajectory

    ERIC Educational Resources Information Center

    Ho, S. Y.; Foong, S. K.; Lim, C. H.; Lim, C. C.; Lin, K.; Kuppan, L.

    2009-01-01

    Projectile motion is usually the first non-uniform two-dimensional motion that students will encounter in a pre-university physics course. In this article, we introduce a novel technique for capturing the trajectory of projectile motion on an inclined Perspex plane. This is achieved by coating the Perspex with a thin layer of fine water droplets…

  1. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    PubMed Central

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and its interaction with aerodynamic characteristics play a primary role in acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate the three-dimensional vocal fold surfaces during phonation captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds at a high accuracy to allow for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485

  2. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. The motion is often fast, as in vehicle movement, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes, and vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can easily be modified for working areas from 100 mm to 10 m. The developed motion capture system uses two to four machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a range of application fields and has demonstrated high accuracy and a high level of automation.
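    The photogrammetric core of such a system: with calibrated projection matrices and matched image points from two (or more) cameras, a 3D point follows from the linear (DLT) triangulation system solved by SVD. The camera parameters in the test are synthetic; this is a generic sketch, not the system's own solver.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                # null vector of A, homogeneous 3D point
    return X[:3] / X[3]
```

With more than two cameras, the extra rows are simply appended to A, which is how additional synchronized views improve accuracy.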

  3. An integrated movement capture and control platform applied towards autonomous movements of surgical robots.

    PubMed

    Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D

    2009-01-01

    Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.

  4. The Open Cluster Chemical Abundances and Mapping (OCCAM) Survey: Galactic Neutron Capture Abundance Gradients

    NASA Astrophysics Data System (ADS)

    O'Connell, Julia; Frinchaboy, Peter M.; Shetrone, Matthew D.; Melendez, Matthew; Cunha, Katia M. L.; Majewski, Steven R.; Zasowski, Gail; APOGEE Team

    2017-01-01

    The evolution of elements, as a function of age, throughout the Milky Way disk provides a key constraint for galaxy evolution models. In an effort to provide these constraints, we have conducted an investigation into the r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 survey. Stars were identified as cluster members by the Open Cluster Chemical Abundance & Mapping (OCCAM) survey, which culls member candidates by radial velocity, metallicity, and proper motion from the observed APOGEE sample. To obtain data for neutron capture elements in these clusters, we conducted a long-term observing campaign covering three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-m telescope and Sandiford Cass Echelle Spectrograph (R ~ 60,000). We present Galactic neutron-capture abundance gradients using 30+ clusters, within 6 kpc of the Sun, covering a range of ages from ~80 Myr to ~10 Gyr.

  5. The Open Cluster Chemical Abundances and Mapping (OCCAM) Survey: Galactic Neutron Capture Abundance Gradients

    NASA Astrophysics Data System (ADS)

    O'Connell, Julia; Frinchaboy, Peter M.; Shetrone, Matthew D.; Melendez, Matthew; Cunha, Katia; Majewski, Steven R.; Zasowski, Gail; APOGEE Team

    2017-06-01

    The evolution of elements, as a function of age, throughout the Milky Way disk provides a key constraint for galaxy evolution models. In an effort to provide these constraints, we have conducted an investigation into the r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 survey. Stars were identified as cluster members by the Open Cluster Chemical Abundance & Mapping (OCCAM) survey, which culls member candidates by radial velocity, metallicity and proper motion from the observed APOGEE sample. To obtain data for neutron capture elements in these clusters, we conducted a long-term observing campaign covering three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-m telescope and Sandiford Cass Echelle Spectrograph (R ~ 60,000). We present Galactic neutron-capture abundance gradients using 30+ clusters, within 6 kpc of the Sun, covering a range of ages from ~80 Myr to ~10 Gyr.

  6. Real-time marker-free motion capture system using blob feature analysis

    NASA Astrophysics Data System (ADS)

    Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho

    2005-02-01

    This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then restored and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct, in real time, the motions of many people wearing a variety of clothing.
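    A constant-velocity Kalman filter of the kind used above to track end-effector positions between frames can be sketched as follows. The 30 fps rate and noise levels are illustrative assumptions, not the paper's values.

```python
import numpy as np

class ConstantVelocityKF:
    """Track a 2D position with state [x, y, vx, vy]."""

    def __init__(self, dt=1.0 / 30, q=1e-3, r=1e-2):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # we observe position only
        self.Q = q * np.eye(4)                     # process noise
        self.R = r * np.eye(2)                     # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        """Predict, then update with a measured 2D blob position z."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

The prediction step also supplies a position estimate when a blob is briefly lost to occlusion, which is one reason such trackers are used between detections.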

  7. A motion deblurring method with long/short exposure image pairs

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao

    2018-01-01

    In this paper, a motion deblurring method using long/short exposure image pairs is presented. The image pairs are captured of the same scene under different exposure times and serve as the input to the deblurring method, so that more information can be used to obtain a deblurred result of high image quality. First, luminance equalization is applied to the short-exposure image. The blur kernel is then estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Next, an L0 image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurred result is obtained by a gain-controlled residual image deconvolution process with the edge map as the gain map. Furthermore, a real experimental optical system was built to capture the image pairs in order to demonstrate the effectiveness of the proposed deblurring framework; the long/short image pairs were obtained under different exposure times and camera gain settings. Experimental results show that the proposed method provides a superior deblurring result in both subjective and objective assessment compared with other deblurring approaches.
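    Once the blur kernel has been estimated, the non-blind step can be illustrated with a simple frequency-domain (Wiener) deconvolution. The paper's gain-controlled residual deconvolution is more elaborate; this is only a stand-in for the same stage of the pipeline.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Deconvolve `blurred` by `kernel`, assuming circular convolution.
    `nsr` is the assumed noise-to-signal power ratio (regularizer)."""
    K = np.fft.fft2(kernel, s=blurred.shape)  # zero-padded kernel spectrum
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))
```

The `nsr` term keeps frequencies where the kernel response is weak from being amplified into ringing, the basic failure mode that residual-deconvolution schemes are designed to suppress further.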

  8. Full-motion video analysis for improved gender classification

    NASA Astrophysics Data System (ADS)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous work in motion analysis has used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video motion capture provides a dataset of higher temporal and spatial resolution for the analysis of dynamic motion. Work using motion capture data has been limited to small datasets in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment, and propose a method based on a nonlinear support vector machine classifier. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improve from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
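    Why a nonlinear classifier helps can be shown in miniature: an RBF-kernel perceptron (a lightweight stand-in for the paper's nonlinear support vector machine) separates XOR-like classes that defeat any linear boundary. The data and parameters here are illustrative, not the gait features from the study.

```python
import numpy as np

def rbf(A, b, gamma=1.0):
    """RBF kernel values between rows of A and a single point b."""
    return np.exp(-gamma * np.sum((A - b) ** 2, axis=-1))

def train_kernel_perceptron(X, y, epochs=20, gamma=1.0):
    """Dual-form perceptron; returns a predict(x) -> +/-1 function."""
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            score = np.sum(alpha * y * rbf(X, X[i], gamma))
            pred = 1 if score > 0 else -1
            if pred != y[i]:
                alpha[i] += 1.0
    return lambda x: 1 if np.sum(alpha * y * rbf(X, x, gamma)) > 0 else -1
```

A true SVM additionally maximizes the margin and uses slack variables, but the kernel trick that lifts the data into a separable feature space is the same.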

  9. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field-position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  10. 4D cone beam CT phase sorting using high frequency optical surface measurement during image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Price, G. J.; Marchant, T. E.; Parkhurst, J. M.; Sharrock, P. J.; Whitfield, G. A.; Moore, C. J.

    2011-03-01

    In image guided radiotherapy (IGRT) two of the most promising recent developments are four dimensional cone beam CT (4D CBCT) and dynamic optical metrology of patient surfaces. 4D CBCT is now becoming commercially available and finds use in treatment planning and verification, and whilst optical monitoring is a young technology, its ability to measure during treatment delivery without dose consequences has led to its uptake in many institutes. In this paper, we demonstrate the use of dynamic patient surfaces, simultaneously captured during CBCT acquisition using an optical sensor, to phase sort projection images for 4D CBCT volume reconstruction. The dual modality approach we describe means that in addition to 4D volumetric data, the system provides correlated wide field measurements of the patient's skin surface with high spatial and temporal resolution. As well as the value of such complementary data in verification and motion analysis studies, it introduces flexibility into the acquisition of the signal required for phase sorting. The specific technique used may be varied according to individual patient circumstances and the imaging target. We give details of three different methods of obtaining a suitable signal from the optical surfaces: simply following the motion of triangulation spots used to calibrate the surfaces' absolute height; monitoring the surface height in a single, arbitrarily selected, camera pixel; and tracking, in three dimensions, the movement of a surface feature. In addition to describing the system and methodology, we present initial results from a case study oesophageal cancer patient.
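    Phase sorting of the kind described above can be sketched directly: detect inhale peaks in the optical surface signal, assign each CBCT projection a phase in [0, 1) by linear interpolation between successive peaks, and bin projections by phase. The local-maximum peak detector is a simplifying assumption; real traces would be filtered first.

```python
import numpy as np

def find_peaks(signal):
    """Indices of simple local maxima (assumed inhale peaks)."""
    s = np.asarray(signal, float)
    return [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]

def respiratory_phase(signal):
    """Phase in [0, 1) per sample; NaN outside the first/last peak."""
    p = find_peaks(signal)
    phase = np.full(len(signal), np.nan)
    for a, b in zip(p[:-1], p[1:]):
        phase[a:b] = np.arange(b - a) / (b - a)
    return phase

def sort_projections(signal, n_bins=4):
    """Map each phase bin to the indices of projections falling in it."""
    ph = respiratory_phase(signal)
    return {k: np.where((ph >= k / n_bins) & (ph < (k + 1) / n_bins))[0]
            for k in range(n_bins)}
```

Each bin's projection indices then feed a separate CBCT reconstruction, one volume per respiratory phase.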

  11. Accuracy of human motion capture systems for sport applications; state-of-the-art review.

    PubMed

    van der Kruk, Eline; Reijne, Marco M

    2018-05-09

    Sport research often requires human motion capture of an athlete. It can, however, be labour-intensive and difficult to select the right system, while manufacturers report specifications determined in set-ups that differ greatly from sport research in terms of volume, environment, and motion. The aim of this review is to assist researchers in selecting a suitable motion capture system for their experimental set-up for sport applications. An open online platform is initiated to support (sport) researchers in selecting a system and to enable them to contribute to and update the overview. Design: systematic review. Method: electronic searches in Scopus, Web of Science and Google Scholar were performed, and the reference lists of the screened articles were scrutinised to determine the human motion capture systems used in academically published studies on sport analysis. An overview of 17 human motion capture systems is provided, reporting the general specifications given by the manufacturer (weight and size of the sensors, maximum capture volume, environmental feasibilities) and calibration specifications as determined in peer-reviewed studies. The accuracy of each system is plotted against its measurement range. The overview and chart can assist researchers in selecting a suitable measurement system. To increase the robustness of the database and keep up with technological developments, we encourage researchers to perform an accuracy test prior to their experiment and to add to the chart and system overview (online, open access).

  12. Motion visualization and estimation for flapping wing systems

    NASA Astrophysics Data System (ADS)

    Hsu, Tzu-Sheng Shane; Fitzgerald, Timothy; Nguyen, Vincent Phuc; Patel, Trisha; Balachandran, Balakumar

    2017-04-01

    Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. As a case study, motion capture of a free flying Manduca sexta, also known as hawkmoth, is considered by using three synchronized high-speed cameras. A solid finite element (FE) representation is used as a reference body and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. One of the original aspects of this work is the formulation of an objective function and the use of shadow matching and strain-energy regularization. With this objective function, the authors penalize the projection differences between silhouettes of the captured images and the FE representation of the deformed body. The process and procedures undertaken to go from high-speed videography to motion estimation are discussed, and snapshots of representative results are presented. Finally, the captured free-flight motion is also characterized and quantified.

  13. The effect of virtual reality on gait variability.

    PubMed

    Katsavelis, Dimitrios; Mukherjee, Mukul; Decker, Leslie; Stergiou, Nicholas

    2010-07-01

    Optic flow (OF) plays an important role in human locomotion, and manipulation of OF characteristics can cause changes in locomotion patterns. The purpose of the study was to investigate the effect of the velocity of optic flow on the amount and structure of gait variability. Each subject underwent four conditions of treadmill walking at a self-selected pace. In three conditions the subjects walked in an endless virtual corridor, while a fourth control condition was also included. The three virtual conditions differed in the speed of the displayed optic flow: the same speed as the treadmill (OFn), faster (OFf), and slower (OFs). Gait kinematics were tracked with an optical motion capture system, and gait variability measures of the hip, knee, and ankle range of motion and of the stride interval were analyzed. The amount of variability was evaluated with a linear measure, the coefficient of variation, while the structure of variability, i.e., its organization over time, was measured with nonlinear measures: approximate entropy and detrended fluctuation analysis. The linear measure did not show significant differences between non-VR and VR conditions, whereas the nonlinear measures identified significant differences at the hip, at the ankle, and in the stride interval. In response to manipulation of the optic flow, significant differences were observed between the three virtual conditions in the order OFn greater than OFf greater than OFs. Measures of the structure of variability are thus more sensitive to changes in gait due to manipulation of visual cues, whereas changes in the amount of variability may be concealed by adaptive mechanisms. Visual cues increase the complexity of gait variability and may increase the degrees of freedom available to the subject. Further exploration of the effects of optic flow manipulation on locomotion may provide an effective tool for the rehabilitation of subjects with sensorimotor issues.
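    The two families of measures above can be sketched directly: coefficient of variation for the amount of variability, and approximate entropy (Pincus) for its temporal structure. The parameters m = 2 and r = 0.2×SD are the conventional defaults, not necessarily the study's settings.

```python
import numpy as np

def coefficient_of_variation(x):
    """CV in percent: spread relative to the mean."""
    x = np.asarray(x, float)
    return float(np.std(x) / np.mean(x) * 100.0)

def approximate_entropy(x, m=2, r_frac=0.2):
    """ApEn: lower for regular signals, higher for irregular ones."""
    x = np.asarray(x, float)
    r = r_frac * np.std(x)

    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between all pairs of embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.mean(np.log(np.mean(dist <= r, axis=1)))

    return float(phi(m) - phi(m + 1))
```

A periodic stride-interval series yields a low ApEn while white noise of the same length yields a high one, which is the sense in which ApEn captures "organization over time" rather than spread.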

  14. An active-optics image-motion compensation technology application for high-speed searching and infrared detection system

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing

    2018-03-01

    An active-optics image-motion compensation technique based on ultra-high-angular-velocity, small-aperture, high-precision stabilization control is put forward in this paper. The image-blur problem caused by relative motion of several hundred degrees per second between the imaging system and the target is analyzed theoretically. A velocity-matching model of the detection system and the active-optics compensation system is built, and the experimental parameters of an active-optics image-motion compensation platform are designed. Optical compensation with high-precision control at several hundred degrees per second is studied and implemented. With a relative motion velocity of up to 250°/s, the image-motion amplitude exceeds 20 pixels; after active-optics compensation, the motion blur is less than one pixel. The bottleneck of combining ultra-high angular velocity with long exposure times in search and infrared detection systems is thus successfully overcome.

  15. Capture by colour: evidence for dimension-specific singleton capture.

    PubMed

    Harris, Anthony M; Becker, Stefanie I; Remington, Roger W

    2015-10-01

    Previous work on attentional capture has shown the attentional system to be quite flexible in the stimulus properties it can be set to respond to. Several different attentional "modes" have been identified. Feature search mode allows attention to be set for specific features of a target (e.g., red). Singleton detection mode sets attention to respond to any discrepant item ("singleton") in the display. Relational search sets attention for the relative properties of the target in relation to the distractors (e.g., redder, larger). Recently, a new attentional mode was proposed that sets attention to respond to any singleton within a particular feature dimension (e.g., colour; Folk & Anderson, 2010). We tested this proposal against the predictions of previously established attentional modes. In a spatial cueing paradigm, participants searched for a colour target that was randomly either red or green. The nature of the attentional control setting was probed by presenting an irrelevant singleton cue prior to the target display and assessing whether it attracted attention. In all experiments, the cues were red, green, blue, or a white stimulus rapidly rotated (motion cue). The results of three experiments support the existence of a "colour singleton set," finding that all colour cues captured attention strongly, while motion cues captured attention only weakly or not at all. Notably, we also found that capture by motion cues in search for colour targets was moderated by their frequency; rare motion cues captured attention (weakly), while frequent motion cues did not.

  16. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the motion-sensing remote controllers of the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'EyeToy' on the Sony PlayStation 2. Video-based HCI offers the great benefit of freeing players from cumbersome game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, accessible and inexpensive. However, extracting semantic low-level features from video human motion data is still a major challenge: the achievable accuracy depends heavily on each subject's characteristics and on environmental noise. More recently, 3D motion-capture data have been used to visualize real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports titles, 'Angelina Jolie' in the movie Beowulf) and to analyze motions for specific performance tasks (e.g., golf swings and walking). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-part of the human body and each row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, 3D motion-capture data matrices are not pixel values, but are closer to the human level of semantics.

  17. Exercise Sensing and Pose Recovery Inference Tool (ESPRIT) - A Compact Stereo-based Motion Capture Solution For Exercise Monitoring

    NASA Technical Reports Server (NTRS)

    Lee, Mun Wai

    2015-01-01

    Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.

  18. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.

  19. Asynchronous beating of cilia enhances particle capture rate

    NASA Astrophysics Data System (ADS)

    Ding, Yang; Kanso, Eva

    2014-11-01

    Many aquatic micro-organisms use beating cilia to generate feeding currents and capture particles in the surrounding fluid. One capture strategy is to "catch up" with particles while a cilium is beating in the overall flow direction (the effective stroke) and intercept them on the downstream side of the cilium. Here, we developed a 3D computational model of a cilia band with prescribed motion in a viscous fluid and calculated the trajectories of particles of different sizes in the fluid. We found an optimal particle diameter that maximizes the capture rate. The flow field and particle motion indicate that the low capture rate of smaller particles is due to the laminar flow in the neighborhood of the cilia, whereas larger particles have to move above the cilia tips to be advected downstream, which decreases their capture rate. We then analyzed the effect of beating coordination between neighboring cilia on the capture rate. Interestingly, we found that asynchronous beating of the cilia can enhance the relative motion between a cilium and the particles near it and hence increase the capture rate.

  20. The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.

    PubMed

    Hachaj, Tomasz; Ogiela, Marek R

    2016-06-01

    The main novelty of this paper is the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. We show that the Lua language can be successfully used to adapt the GDL classifier to these tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The execution speed obtained allows the methodology to be used in real-time motion capture data processing, where the capture frequency ranges from 100 Hz up to 500 Hz depending on the number of features or classes to be calculated and recognized. The proposed methodology can therefore be used with high-end motion capture systems. We anticipate that this novel, efficient and effective method will greatly help both sports trainers and physiotherapists in their practice. The proposed approach can be directly applied to the kinematic analysis of motion capture data (evaluation of motion without regard to the forces that cause it). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment.

  1. Measurement of three-dimensional posture and trajectory of lower body during standing long jumping utilizing body-mounted sensors.

    PubMed

    Ibata, Yuki; Kitamura, Seiji; Motoi, Kosuke; Sagawa, Koichi

    2013-01-01

    A method for measuring the three-dimensional posture and flying trajectory of the lower body during jumping motion using body-mounted wireless inertial measurement units (WIMUs) is introduced. Each WIMU is composed of two kinds of three-dimensional (3D) accelerometers and gyroscopes with different dynamic ranges, to cope with quick movement, and one 3D geomagnetic sensor. Three WIMUs are mounted under the chest, right thigh and right shank. Thin-film pressure sensors, connected to the shank WIMU and installed under the right heel and tiptoe, distinguish whether the body is grounded or airborne. The initial and final postures of the trunk, thigh and shank at standstill are obtained from gravitational acceleration and geomagnetism. The posture of the body is determined from the 3D direction of each segment, updated by numerical integration of the angular velocity. Flying motion is detected from the pressure sensors, and the 3D flying trajectory is derived by double integration of the trunk acceleration, applying the 3D velocity of the trunk at takeoff. Standing long jump experiments were performed, and the results show that the joint angles and flying trajectory agree with the actual motion measured by an optical motion capture system.
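    The double-integration step described above can be sketched as follows. This is a minimal rectangular-rule integrator, not the authors' implementation; the takeoff velocity v0 is assumed to be already known from the takeoff phase.

```python
import numpy as np

def flight_trajectory(acc, v0, p0, dt):
    # Double-integrate acceleration during the flight phase, seeded with
    # the takeoff velocity, to recover the 3D trajectory (rectangular rule).
    acc = np.asarray(acc, dtype=float)
    v = v0 + np.cumsum(acc, axis=0) * dt   # velocity at each frame
    p = p0 + np.cumsum(v, axis=0) * dt     # position at each frame
    return p

# Ballistic sanity check: constant gravity, takeoff velocity (2, 0, 3) m/s.
dt = 0.01
acc = np.tile([0.0, 0.0, -9.81], (100, 1))
path = flight_trajectory(acc, np.array([2.0, 0.0, 3.0]), np.zeros(3), dt)
```

    After 1 s of simulated flight the horizontal displacement is exactly v0*t, and the vertical displacement matches the analytic parabola up to the rectangular-rule discretization error.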

  2. An effective attentional set for a specific colour does not prevent capture by infrequently presented motion distractors.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-01-01

    An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.

  3. Needle detection in ultrasound using the spectral properties of the displacement field: a feasibility study

    NASA Astrophysics Data System (ADS)

    Beigi, Parmida; Salcudean, Tim; Rohling, Robert; Lessoway, Victoria A.; Ng, Gary C.

    2015-03-01

    This paper presents a new needle detection technique for ultrasound-guided interventions based on the spectral properties of small displacements arising from hand tremor or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit estimates the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest in consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest to determine the spectral correlation. In vivo images were obtained from the tissue near the abdominal aorta to capture extreme intrinsic body motion, and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both involuntary and intentional movement of the needle produces displacement coherent with a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are spectrally distinguishable. Blocks with high spectral coherency at high frequencies are selected, yielding an initial estimate of the needle trajectory channel. The needle trajectory is then detected from a locally thresholded absolute displacement map within this initial estimate. Experimental results show RMS localization accuracies of 1.0 mm, 0.7 mm, and 0.5 mm for hand tremor, vibrational and rotational needle movements, respectively.
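    The least-squares solution of the overdetermined gradient system mentioned above can be sketched as a single per-block Lucas-Kanade step. This generic sketch omits the multiresolution and regularization refinements the paper applies:

```python
import numpy as np

def lucas_kanade_block(Ix, Iy, It):
    # Stack the spatial gradients of one block into the overdetermined
    # system A @ [u, v] = -It and solve it by least squares.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # (N, 2)
    b = -It.ravel()                                  # (N,)
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v): lateral and axial displacement of the block

# Synthetic check: a Gaussian blob translated by (0.5, 0.3) pixels.
x = np.arange(64, dtype=float)
X, Y = np.meshgrid(x, x)
blob = lambda cx, cy: np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 200.0)
I1, I2 = blob(32.0, 32.0), blob(32.5, 32.3)
Iy, Ix = np.gradient(I1)        # axis 0 = y (rows), axis 1 = x (cols)
u, v = lucas_kanade_block(Ix, Iy, I2 - I1)
```

    Repeating this per block over consecutive frames yields the lateral and axial displacement series from which the coherency spectra are computed.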

  4. Technical skills measurement based on a cyber-physical system for endovascular surgery simulation.

    PubMed

    Tercero, Carlos; Kodama, Hirokatsu; Shi, Chaoyang; Ooe, Katsutoshi; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Kwon, Guiryong; Najdovski, Zoran

    2013-09-01

    Quantification of medical skills is a challenge, particularly in simulator-based training. In the case of endovascular intervention, it is desirable that a simulator accurately recreates the morphology and mechanical characteristics of the vasculature while enabling scoring. For this purpose, we propose a cyber-physical system composed of optical sensors for encoding the motion of the catheter body, a magnetic tracker for motion capture of the operator's hands, and opto-mechatronic sensors for measuring the interaction of the catheter tip with the vasculature model wall. Two pilot studies were conducted to measure technical skills, one distinguishing novices from experts and the other measuring unnecessary motion. Proficiency was measurable between expert and novice and also between individual novice users. The results enabled scoring of the user's proficiency level, using sensitivity, reaction time, time to complete a task and respect for tissue integrity as evaluation criteria. Additionally, unnecessary motion was also measurable. The development of cyber-physical simulators for other domains of medicine depends on the study of photoelastic materials for human tissue modelling, and enables quantitative evaluation of skills using surgical instruments and a realistic representation of human tissue. Copyright © 2012 John Wiley & Sons, Ltd.

  5. A novel yet effective motion artefact reduction method for continuous physiological monitoring

    NASA Astrophysics Data System (ADS)

    Alzahrani, A.; Hu, S.; Azorin-Peris, V.; Kalawsky, R.; Zhang, X.; Liu, C.

    2014-03-01

    This study presents a non-invasive, wearable optical technique to continuously monitor vital human signs, as required for personal healthcare in today's increasingly ageing population. It investigates an effective way to capture critical human physiological parameters, i.e., oxygen saturation (SaO2%), heart rate, respiration rate, body temperature and heart rate variability, with a closely coupled wearable opto-electronic patch sensor (OEPS) together with real-time, secure wireless communication functionalities. This work presents the first step of the research: an automatic noise-cancellation method using a 3-axis MEMS accelerometer to recover signals corrupted by body movement, which is one of the biggest sources of motion artefacts. The effects of these motion artefacts have been reduced by an enhanced electronic design, the development of noise self-cancellation and improved sensor stability. The signals from the accelerometer and the opto-electronic sensor are highly correlated, allowing the desired pulse waveform, rich in bioinformatic signals, to be retrieved with reduced motion artefacts. Preliminary results from bench tests and the laboratory setup demonstrate that the goal of high-performance wearable opto-electronics is viable and feasible.

  6. The long- and short-term variability of breathing induced tumor motion in lung and liver over the course of a radiotherapy treatment.

    PubMed

    Dhont, Jennifer; Vandemeulebroucke, Jef; Burghelea, Manuela; Poels, Kenneth; Depuydt, Tom; Van Den Begin, Robbe; Jaudet, Cyril; Collen, Christine; Engels, Benedikt; Reynders, Truus; Boussaer, Marlies; Gevaert, Thierry; De Ridder, Mark; Verellen, Dirk

    2018-02-01

    To evaluate the short- and long-term variability of breathing-induced tumor motion, 3D tumor motion of 19 lung and 18 liver lesions captured over the course of an SBRT treatment was evaluated and compared to the motion on 4D-CT. An implanted fiducial could be used for unambiguous motion information. Fast orthogonal fluoroscopy (FF) sequences, included in the treatment workflow, were used to evaluate motion during treatment. Several motion parameters were compared between different FF sequences from the same fraction to evaluate the intrafraction variability. To assess interfraction variability, amplitude and hysteresis were compared between fractions and with the 3D tumor motion registered by 4D-CT. Population-based margins, necessary on top of the ITV to capture all motion variability, were calculated from the motion captured during treatment. Baseline drift in the cranio-caudal (CC) or anterior-posterior (AP) direction is significant (i.e., >5 mm) for a large group of patients, in contrast to intrafraction amplitude and hysteresis variability. However, a correlation between intrafraction amplitude variability and mean motion amplitude was found (Pearson's correlation coefficient, r = 0.72, p < 10^-4). Interfraction variability in amplitude is significant for 46% of all lesions. As such, 4D-CT accurately captures the motion during treatment for some fractions but not for all. Accounting for motion variability during treatment increases the PTV margins in all directions, most significantly in CC: from 5 mm to 13.7 mm for lung and 8.0 mm for liver. Both short-term and day-to-day tumor motion variability can be significant, especially for lesions moving with amplitudes above 7 mm. Abandoning passive motion management strategies in favor of more active ones is advised. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. IMU-Based Joint Angle Measurement for Gait Analysis

    PubMed Central

    Seel, Thomas; Raisch, Jörg; Schauer, Thomas

    2014-01-01

    This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°. PMID:24743160
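    The abstract does not give the authors' exact sensor-fusion equations. A common minimal approach to obtaining a drift-free flexion/extension angle from only a gyroscope and an accelerometer is a complementary filter, sketched below; the filter constant alpha and the simulated signals are assumptions for illustration, not the paper's method.

```python
import numpy as np

def complementary_filter(gyro_rate, acc_angle, dt, alpha=0.98):
    # Fuse the gyroscope rate (accurate short-term, drifts when integrated)
    # with the accelerometer-derived inclination (noisy but drift-free).
    angle = np.empty(len(gyro_rate))
    angle[0] = acc_angle[0]
    for k in range(1, len(gyro_rate)):
        predicted = angle[k - 1] + gyro_rate[k] * dt   # integrate rate
        angle[k] = alpha * predicted + (1 - alpha) * acc_angle[k]
    return angle

# Simulated knee angle: biased gyro vs. noisy accelerometer inclination.
dt = 0.01
t = np.arange(0, 10, dt)
true = 30.0 * np.sin(2 * np.pi * 0.5 * t)            # degrees
rng = np.random.default_rng(1)
rate = np.gradient(true, dt) + 2.0 + 0.5 * rng.standard_normal(t.size)
acc = true + 2.0 * rng.standard_normal(t.size)
fused = complementary_filter(rate, acc, dt)
drifting = true[0] + np.cumsum(rate) * dt            # gyro-only estimate
```

    With a 2°/s gyro bias, the gyro-only estimate drifts by tens of degrees over 10 s, while the fused estimate stays within a few degrees of the true angle.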

  8. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  9. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoiding this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.

  10. A Quasi-Static Method for Determining the Characteristics of a Motion Capture Camera System in a "Split-Volume" Configuration

    NASA Technical Reports Server (NTRS)

    Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob

    2001-01-01

    To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics of motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one each in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasi-static methodology.
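    For a quasi-static protocol of this kind, accuracy and repeatability are typically computed from repeated measurements of a known reference, such as a calibration wand of fixed length. The sketch below uses conventional definitions (mean-error accuracy, standard-deviation repeatability), which are assumptions here, not necessarily the study's exact formulas:

```python
import numpy as np

def accuracy_and_repeatability(measured, reference):
    # Accuracy: deviation of the mean measurement from the known reference.
    # Repeatability: spread (sample std) of the repeated measurements.
    measured = np.asarray(measured, dtype=float)
    accuracy = abs(measured.mean() - reference)
    repeatability = measured.std(ddof=1)
    return accuracy, repeatability

# Repeated measurements (mm) of a 100 mm calibration wand.
samples = [100.1, 99.9, 100.2, 100.0]
acc_err, repeat = accuracy_and_repeatability(samples, 100.0)
```

    Resolution would additionally require the smallest detectable displacement, which depends on the camera configuration and is not derivable from repeated static measurements alone.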

  11. Scalable sensing electronics towards a motion capture suit

    NASA Astrophysics Data System (ADS)

    Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.

    2013-04-01

    Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sport coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20 Hz.

  12. Biomechanical analysis using Kinovea for sports application

    NASA Astrophysics Data System (ADS)

    Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin

    2018-04-01

    This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during a drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded in the sagittal plane only, using an established infrared motion capture system (Hawk–Cortex) and an HD VideoCam. The capture was repeated 5 times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software package), and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results obtained (drop jump pattern) using HD VideoCam–Kinovea are close to those obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, demonstrating the repeatability of the protocol and the reliability of the results. It can be concluded that the integration of HD VideoCam with Kinovea has the potential to become a reliable motion capture and analysis system; moreover, it is low-cost, portable and easy to use. The current study and its findings contribute significant knowledge pertaining to motion capture and analysis, the drop jump movement and HD VideoCam–Kinovea integration.

  13. A Novel Method to Compute Breathing Volumes via Motion Capture Systems: Design and Experimental Trials.

    PubMed

    Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio

    2017-10-01

    Respiratory assessment can be carried out using motion capture systems. A geometrical model is mandatory to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on the tetrahedral decomposition of the chest wall, as integrated in a commercial motion capture system. Eight healthy volunteers were enrolled, and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R^2 = .94) than between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R^2 = .92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
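    The prism decomposition can be illustrated with a minimal sketch: each marker triangle on the chest surface, projected onto a coronal reference plane, defines a prism whose volume is the projected area times the mean marker depth. This is an illustrative simplification of the paper's 82-prism model; the planar reference and the toy marker grid are assumptions.

```python
import numpy as np

def prism_volume(p1, p2, p3, z_ref=0.0):
    # Volume of the prism between a chest-surface marker triangle and
    # its projection on the coronal reference plane z = z_ref:
    # projected triangle area (shoelace formula) times mean depth.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    mean_depth = (p1[2] + p2[2] + p3[2]) / 3.0 - z_ref
    return area * mean_depth

def chest_volume(markers, triangles, z_ref=0.0):
    # Sum the prisms defined by a triangulation of the marker grid;
    # tracking this sum over time gives the breathing volume signal.
    return sum(prism_volume(markers[a], markers[b], markers[c], z_ref)
               for a, b, c in triangles)

# Sanity check: a 1 x 1 x 1 slab described by 4 markers and 2 triangles.
markers = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
volume = chest_volume(markers, [(0, 1, 3), (0, 3, 2)])
```

    Evaluating chest_volume at every capture frame and subtracting the end-expiration baseline yields the volume-versus-time curve that is compared against spirometry.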

  14. Magnetic domain wall creep and depinning: A scalar field model approach

    NASA Astrophysics Data System (ADS)

    Caballero, Nirvana B.; Ferrero, Ezequiel E.; Kolton, Alejandro B.; Curiale, Javier; Jeudy, Vincent; Bustingorry, Sebastian

    2018-06-01

    Magnetic domain wall motion is at the heart of new magnetoelectronic technologies and hence the need for a deeper understanding of domain wall dynamics in magnetic systems. In this context, numerical simulations using simple models can capture the main ingredients responsible for the complex observed domain wall behavior. We present a scalar field model for the magnetization dynamics of quasi-two-dimensional systems with a perpendicular easy axis of magnetization which allows a direct comparison with typical experimental protocols, used in polar magneto-optical Kerr effect microscopy experiments. We show that the thermally activated creep and depinning regimes of domain wall motion can be reached and the effect of different quenched disorder implementations can be assessed with the model. In particular, we show that the depinning field increases with the mean grain size of a Voronoi tessellation model for the disorder.

  15. A trillion frames per second: the techniques and applications of light-in-flight photography.

    PubMed

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light-in-flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen and therefore extract information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  16. Optical Manipulation of Single Magnetic Beads in a Microwell Array on a Digital Microfluidic Chip.

    PubMed

    Decrop, Deborah; Brans, Toon; Gijsenbergh, Pieter; Lu, Jiadi; Spasic, Dragana; Kokalj, Tadej; Beunis, Filip; Goos, Peter; Puers, Robert; Lammertyn, Jeroen

    2016-09-06

    The detection of single molecules in magnetic microbead microwell array formats revolutionized the development of digital bioassays. However, retrieval of individual magnetic beads from these arrays has not been realized until now despite having great potential for studying captured targets at the individual level. In this paper, optical tweezers were implemented on a digital microfluidic platform for accurate manipulation of single magnetic beads seeded in a microwell array. Successful optical trapping of magnetic beads was found to be dependent on Brownian motion of the beads, suggesting a 99% chance of trapping a vibrating bead. A tailor-made experimental design was used to screen the effect of bead type, ionic buffer strength, surfactant type, and concentration on the Brownian activity of beads in microwells. With the optimal conditions, the manipulation of magnetic beads was demonstrated by their trapping, retrieving, transporting, and repositioning to a desired microwell on the array. The presented platform combines the strengths of digital microfluidics, digital bioassays, and optical tweezers, resulting in a powerful dynamic microwell array system for single molecule and single cell studies.

  17. Identification of pre-impact conditions of a cyclist involved in a vehicle-bicycle accident using an optimized MADYMO reconstruction combined with motion capture.

    PubMed

    Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu

    2018-05-01

    The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective functions of the walking scenario were significantly lower than those of the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.

  18. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size and weight and the wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily-life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that is representative of real physiological motions and that is referred to as a functional frame (FF). We also present a novel cost function for the Levenberg-Marquardt algorithm to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
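    The abstract does not reproduce the paper's cost function, but the SF-to-FF rotation retrieval can be sketched as a Levenberg-Marquardt least-squares fit over paired direction measurements. The sketch below uses `scipy.optimize.least_squares` with `method="lm"` as a stand-in, on synthetic data:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical sketch: recover the sensor-frame-to-functional-frame rotation
# from the same set of directions observed in both frames.
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.3, -0.2, 0.5])
v_ff = rng.normal(size=(20, 3))        # directions in the functional frame
v_sf = R_true.inv().apply(v_ff)        # the same directions seen by the sensor

def residuals(rotvec):
    """Misalignment between rotated sensor directions and the FF directions."""
    R = Rotation.from_rotvec(rotvec)
    return (R.apply(v_sf) - v_ff).ravel()

sol = least_squares(residuals, x0=np.zeros(3), method="lm")
R_est = Rotation.from_rotvec(sol.x)
```

In practice, the direction pairs would come from functional calibration movements rather than synthetic data.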

  19. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  20. Colloidal crystal grain boundary formation and motion

    PubMed Central

    Edwards, Tara D.; Yang, Yuguang; Beltran-Villegas, Daniel J.; Bevan, Michael A.

    2014-01-01

    The ability to assemble nano- and micro- sized colloidal components into highly ordered configurations is often cited as the basis for developing advanced materials. However, the dynamics of stochastic grain boundary formation and motion have not been quantified, which limits the ability to control and anneal polycrystallinity in colloidal based materials. Here we use optical microscopy, Brownian Dynamic simulations, and a new dynamic analysis to study grain boundary motion in quasi-2D colloidal bicrystals formed within inhomogeneous AC electric fields. We introduce “low-dimensional” models using reaction coordinates for condensation and global order that capture first passage times between critical configurations at each applied voltage. The resulting models reveal that equal sized domains at a maximum misorientation angle show relaxation dominated by friction limited grain boundary diffusion; and in contrast, asymmetrically sized domains with less misorientation display much faster grain boundary migration due to significant thermodynamic driving forces. By quantifying such dynamics vs. compression (voltage), kinetic bottlenecks associated with slow grain boundary relaxation are understood, which can be used to guide the temporal assembly of defect-free single domain colloidal crystals. PMID:25139760

  1. Vertical Jump Height Estimation Algorithm Based on Takeoff and Landing Identification Via Foot-Worn Inertial Sensing.

    PubMed

    Wang, Jianren; Xu, Junkai; Shull, Peter B

    2018-03-01

    Vertical jump height is widely used for assessing motor development, functional ability, and motor capacity. Traditional methods for estimating vertical jump height rely on force plates or optical marker-based motion capture systems, limiting assessment to people with access to specialized laboratories. Current wearable designs need to be attached to the skin or strapped to an appendage, which can potentially be uncomfortable and inconvenient to use. This paper presents a novel algorithm for estimating vertical jump height based on foot-worn inertial sensors. Twenty healthy subjects performed countermovement jumping trials, and maximum jump height was determined via inertial sensors located above the toe and under the heel and was compared with the gold-standard maximum jump height estimation via optical marker-based motion capture. Average vertical jump height estimation errors from inertial sensing at the toe and heel were -2.2 ± 2.1 cm and -0.4 ± 3.8 cm, respectively. Vertical jump height estimation with the presented algorithm via inertial sensing showed excellent reliability at the toe (ICC(2,1) = 0.98) and heel (ICC(2,1) = 0.97). There was no significant bias in the inertial sensing at the toe, but proportional bias (b = 1.22) and fixed bias (a = -10.23 cm) were detected in inertial sensing at the heel. These results indicate that the presented algorithm could be applied to foot-worn inertial sensors to estimate maximum jump height, enabling assessment outside of traditional laboratory settings; to avoid bias errors, the toe may be a more suitable location for inertial sensor placement than the heel.
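    The abstract does not detail the algorithm itself, but once takeoff and landing are identified, a standard flight-time relation (not necessarily the paper's exact formulation) estimates jump height from the airborne time, h = g·t²/8:

```python
# Flight-time estimate of vertical jump height: during free flight the center
# of mass rises for half the flight time, so h = g * (t/2)**2 / 2 = g*t**2/8.
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight):
    """Jump height in meters given the time between takeoff and landing (s)."""
    return G * t_flight ** 2 / 8.0

h = jump_height_from_flight_time(0.5)  # a 0.5 s flight
```

With foot-worn inertial sensors, takeoff and landing would be detected from acceleration signatures, and t_flight taken as the interval between them.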

  2. Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion

    PubMed Central

    Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.

    2011-01-01

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834

  3. Relationships of a Circular Singer Arm Gesture to Acoustical and Perceptual Measures of Singing: A Motion Capture Study

    ERIC Educational Resources Information Center

    Brunkan, Melissa C.

    2016-01-01

    The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…

  4. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of...blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS

  5. Automated Quantification of the Landing Error Scoring System With a Markerless Motion-Capture System.

    PubMed

    Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W

    2017-11-01

      The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle.   To determine the reliability of an automated markerless motion-capture system for scoring the LESS.   Cross-sectional study.   United States Military Academy.   A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg).   Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score.   We observed moderate reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability relative to consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons.   The markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
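    The agreement statistics reported above can be computed from paired binary error ratings. A minimal sketch with illustrative ratings (for two categories, PABAK reduces to 2·Po − 1, where Po is the observed proportion of agreement):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    cats = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

def pabak(a, b):
    """Prevalence- and bias-adjusted kappa; 2*Po - 1 for two categories."""
    po = np.mean(np.asarray(a) == np.asarray(b))
    return 2 * po - 1

# illustrative error-present (1) / error-absent (0) ratings for one LESS item
r1 = [1, 0, 1, 1, 0, 0, 1, 0]
r2 = [1, 0, 1, 0, 0, 0, 1, 1]
k, pk = cohens_kappa(r1, r2), pabak(r1, r2)
```

Per-item statistics as in the study would simply repeat this over all LESS items.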

  6. Derivation of capture probabilities for the corotation eccentric mean motion resonances

    NASA Astrophysics Data System (ADS)

    El Moutamid, Maryame; Sicardy, Bruno; Renner, Stéfan

    2017-08-01

    We study in this paper the capture of a massless particle into an isolated, first-order corotation eccentric resonance (CER), in the framework of the planar, eccentric and restricted three-body problem near a m + 1: m mean motion commensurability (m integer). While capture into Lindblad eccentric resonances (where the perturber's orbit is circular) has been investigated years ago, capture into CER (where the perturber's orbit is elliptic) has not yet been investigated in detail. Here, we derive the generic equations of motion near a CER in the general case where both the perturber and the test particle migrate. We derive the probability of capture in that context, and we examine more closely two particular cases: (I) if only the perturber is migrating, capture is possible only if the migration is outward from the primary. Notably, the probability of capture is independent of the way the perturber migrates outward; (II) if only the test particle is migrating, then capture is possible only if the algebraic value of its migration rate is a decreasing function of orbital radius. In this case, the probability of capture is proportional to the radial gradient of migration. These results differ from the capture into Lindblad eccentric resonance (LER), where it is necessary that the orbits of the perturber and the test particle converge for capture to be possible.

  7. Optic flow detection is not influenced by visual-vestibular congruency.

    PubMed

    Holten, Vivian; MacNeilage, Paul R

    2018-01-01

    Optic flow patterns generated by self-motion relative to the stationary environment result in congruent visual-vestibular self-motion signals. Incongruent signals can arise due to object motion, vestibular dysfunction, or artificial stimulation, which are less common. Hence, we are predominantly exposed to congruent rather than incongruent visual-vestibular stimulation. If the brain takes advantage of this probabilistic association, we expect observers to be more sensitive to visual optic flow that is congruent with ongoing vestibular stimulation. We tested this expectation by measuring the motion coherence threshold, which is the percentage of signal versus noise dots, necessary to detect an optic flow pattern. Observers seated on a hexapod motion platform in front of a screen experienced two sequential intervals. One interval contained optic flow with a given motion coherence and the other contained noise dots only. Observers had to indicate which interval contained the optic flow pattern. The motion coherence threshold was measured for detection of laminar and radial optic flow during leftward/rightward and fore/aft linear self-motion, respectively. We observed no dependence of coherence thresholds on vestibular congruency for either radial or laminar optic flow. Prior studies using similar methods reported both decreases and increases in coherence thresholds in response to congruent vestibular stimulation; our results do not confirm either of these prior reports. While methodological differences may explain the diversity of results, another possibility is that motion coherence thresholds are mediated by neural populations that are either not modulated by vestibular stimulation or that are modulated in a manner that does not depend on congruency.

  8. Validity of clinical outcome measures to evaluate ankle range of motion during the weight-bearing lunge test.

    PubMed

    Hall, Emily A; Docherty, Carrie L

    2017-07-01

    To determine the concurrent validity of standard clinical outcome measures compared to a laboratory outcome measure while performing the weight-bearing lunge test (WBLT). Cross-sectional study. Fifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with a digital inclinometer at 15 cm distal to the tibial tuberosity (°), dorsiflexion angle with the inclinometer at the tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were computed, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates stronger concurrent validity. There was a high correlation between each measurement technique and the reference standard. Specifically, the correlation between the inclinometer placement at 15 cm below the tibial tuberosity (44.9° ± 5.5°) and the motion capture angle (27.0° ± 6.0°) was r = 0.76 (p = 0.001), between the inclinometer placement at the tibial tuberosity (39.0° ± 4.6°) and the motion capture angle was r = 0.71 (p = 0.001), and between the distance-from-the-wall clinical measure (10.3 ± 3.0 cm) and the motion capture angle was r = 0.74 (p = 0.001). This study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, obtaining maximum lunge distance and inclinometer angles are both valid assessments during the weight-bearing lunge test. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
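    Concurrent validity here rests on the Pearson product-moment correlation between each clinical measure and the motion-capture reference. A minimal sketch with made-up angle data (not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired measures."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical paired measurements: inclinometer angle vs. 2D motion capture
inclinometer = [40.0, 44.0, 47.0, 50.0, 43.0]   # degrees
mocap_angle  = [22.0, 26.0, 30.0, 33.0, 24.0]   # degrees
r = pearson_r(inclinometer, mocap_angle)
```

Note that a high r indicates the measures covary strongly even when, as in the study, their absolute values differ systematically.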

  9. A novel method to replicate the kinematics of the carpus using a six degree-of-freedom robot.

    PubMed

    Fraysse, François; Costi, John J; Stanley, Richard M; Ding, Boyin; McGuire, Duncan; Eng, Kevin; Bain, Gregory I; Thewlis, Dominic

    2014-03-21

    Understanding the kinematics of the carpus is essential to the understanding and treatment of wrist pathologies. However, many of the previous techniques presented are limited by non-functional motion or the interpolation of points from static images at different postures. We present a method that has the capability of replicating the kinematics of the wrist during activities of daily living using a unique mechanical testing system. To quantify the kinematics of the carpal bones, we used bone pin-mounted markers and optical motion capture methods. In this paper, we present a hammering motion as an example of an activity of daily living. However, the method can be applied to a wide variety of movements. Our method showed good accuracy (1.0-2.6°) of in vivo movement reproduction in our ex vivo model. Most carpal motion during wrist flexion-extension occurs at the radiocarpal level while in ulnar deviation the motion is more equally shared between radiocarpal and midcarpal joints, and in radial deviation the motion happens mainly at the midcarpal joint. For all rotations, there was more rotation of the midcarpal row relative to the lunate than relative to the scaphoid or triquetrum. For the functional motion studied (hammering), there was more midcarpal motion in wrist extension compared to pure wrist extension while radioulnar deviation patterns were similar to those observed in pure wrist radioulnar deviation. Finally, it was found that for the amplitudes studied the amount of carpal rotations was proportional to global wrist rotations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.

    PubMed

    Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael

    2006-08-01

    The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which questions the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745], stimulating dynamic vergence by a counter-phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width), with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar to the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r = 0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.

  11. A video-based system for hand-driven stop-motion animation.

    PubMed

    Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue

    2013-01-01

    Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.

  12. Estimating 3D L5/S1 moments and ground reaction forces during trunk bending using a full-body ambulatory inertial motion capture system.

    PubMed

    Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H

    2016-04-11

    Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMSerrors) of moment/force time series and the intraclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMSerrors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMSerrors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. Copyright © 2015 Elsevier Ltd. All rights reserved.
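    The two correspondence metrics used above, RMS error over time series and ICC(2,1) over peak values, can be sketched as follows (two-way random effects, absolute agreement, single measure; data below are illustrative, not the study's):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two time series."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: (n subjects x k raters/systems)."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)  # systems
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0, keepdims=True) + grand
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical peak vertical GRFs (N) from the lab reference and the IMC system
peaks_fp  = [810.0, 765.0, 902.0, 688.0]
peaks_imc = [805.0, 770.0, 896.0, 691.0]
icc = icc_2_1(np.column_stack([peaks_fp, peaks_imc]))
```

RMSerrors would be computed per trial on the moment/force time series, while the ICC summarizes agreement of the peak values across subjects.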

  13. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive and depth-resolved microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion caused by the involuntary movement of the human eye and head than the traditional OMAG system. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first uses a low lateral resolution (16 μm) and a wide field of view (4 × 3 mm² with a single scan and 7 × 8 mm² for multiple scans), while the second uses a high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm² with a single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to the current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  14. A Study of Vicon System Positioning Performance.

    PubMed

    Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier

    2017-07-07

    Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport or animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one player in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability is studied with calibrated ground-truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well on static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can reach 0.3 mm in our dynamic study.
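    The static metrics reported (mean absolute error and variability) follow directly from tracked marker positions compared against a calibrated ground truth. A minimal sketch with hypothetical millimetre-scale data:

```python
import numpy as np

def static_accuracy(measured, truth):
    """Mean absolute positioning error and its variability (std of the error
    norms) between tracked positions and a calibrated ground truth."""
    err = np.linalg.norm(np.asarray(measured) - np.asarray(truth), axis=1)
    return float(err.mean()), float(err.std())

# hypothetical residuals of a static marker vs. its known position (mm)
measured = np.array([[0.10, 0.0, 0.0],
                     [0.00, 0.2, 0.0],
                     [0.00, 0.0, 0.15]])
mae, spread = static_accuracy(measured, np.zeros((3, 3)))
```

Dynamic accuracy requires a moving ground truth, which is why the paper introduces a dedicated setup instead of relying on inter-marker distances.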

  15. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks

    PubMed Central

    Vakanski, A; Ferguson, JM; Lee, S

    2016-01-01

    Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undergoing physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, perform data analysis by comparing the performed motions to a reference model of prescribed motions, and send the analysis results to the patient's physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions, related to an exercise prescribed by a physiotherapist to a patient and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643

  16. Mathematical Modeling and Evaluation of Human Motions in Physical Therapy Using Mixture Density Neural Networks.

    PubMed

    Vakanski, A; Ferguson, J M; Lee, S

    2016-12-01

    The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undergoing physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, perform data analysis by comparing the performed motions to a reference model of prescribed motions, and send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions, related to an exercise prescribed by a physiotherapist to a patient and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.
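
    The evaluation metric described in both of these records, the mean log-likelihood of an observed sequence under a Gaussian mixture model, can be sketched directly. A toy illustration in Python with isotropic components; all parameters here are synthetic (in the paper, the mixture is produced by the neural network rather than fixed by hand):

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, sigmas):
    """Log-likelihood of one observation under a mixture of isotropic
    Gaussians with component weights, means (K, d) and std devs (K,)."""
    d = means.shape[1]
    diff = x - means
    log_norm = -0.5 * d * np.log(2 * np.pi * sigmas ** 2)
    log_comp = log_norm - 0.5 * np.sum(diff ** 2, axis=1) / sigmas ** 2
    return np.log(np.sum(weights * np.exp(log_comp)))  # sum over components

def mean_log_likelihood(sequence, weights, means, sigmas):
    """The abstract's performance metric: mean log-likelihood of an
    observed motion sequence under the reference mixture model."""
    return float(np.mean([gmm_log_likelihood(x, weights, means, sigmas)
                          for x in sequence]))

# Toy reference model with two components in 2-D, plus two test sequences
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([0.3, 0.3])
consistent = np.array([[0.05, -0.02], [0.95, 1.03]])  # near the modes
inconsistent = np.array([[3.0, 3.0], [-2.0, 2.0]])    # far from the modes
```

A consistent performance scores a higher mean log-likelihood than an inconsistent one, which is how the metric ranks a subject's exercise against the reference dataset.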

  17. Portable pathogen detection system

    DOEpatents

    Colston, Billy W.; Everett, Matthew; Milanovich, Fred P.; Brown, Steve B.; Vendateswaran, Kodumudi; Simon, Jonathan N.

    2005-06-14

    A portable pathogen detection system that accomplishes on-site multiplex detection of targets in biological samples. The system includes: microbead-specific reagents, incubation/mixing chambers, a disposable microbead capture substrate, and an optical measurement and decoding arrangement. The basis of this system is a highly flexible Liquid Array that utilizes optically encoded microbeads as the templates for biological assays. Target biological samples are optically labeled and captured on the microbeads, which are in turn captured on a disposable capture substrate, as either an ordered or disordered array, and then optically read.

  18. MPCV Exercise Operational Volume Analysis

    NASA Technical Reports Server (NTRS)

    Godfrey, A.; Humphreys, B.; Funk, J.; Perusek, G.; Lewandowski, B. E.

    2017-01-01

    In order to minimize the loss of bone and muscle mass during spaceflight, the Multi-purpose Crew Vehicle (MPCV) will include an exercise device and enough free space within the cabin for astronauts to use the device effectively. The NASA Digital Astronaut Project (DAP) has been tasked with using computational modeling to aid in determining whether or not the available operational volume is sufficient for in-flight exercise. Motion capture data were acquired using a 12-camera Smart DX system (BTS Bioengineering, Brooklyn, NY) while exercisers performed 9 resistive exercises without volume restrictions in a 1g environment. Data were collected from two male subjects, one in the 99th percentile of height and the other in the 50th percentile, using between 25 and 60 motion capture markers. Motion capture data were also recorded while a third subject, also near the 50th percentile in height, performed aerobic rowing during a parabolic flight. A motion capture system and algorithms developed previously and presented at last year's HRP-IWS were utilized to collect and process the data from the parabolic flight [1]. These motions were applied to a scaled version of a biomechanical model within the biomechanical modeling software OpenSim [2], and the volume sweeps of the motions were visually assessed against an imported CAD model of the operational volume. Further numerical analysis was performed using Matlab (Mathworks, Natick, MA) and the OpenSim API. This analysis determined the location of every marker in space over the duration of the exercise motion and the distance of each marker to the nearest surface of the volume. Containment of the exercise motions within the operational volume was determined on a per-exercise and per-subject basis. The orientation of the exerciser and the angle of the footplate were two important factors on which containment depended. Regions where the exercise motion exceeds the bounds of the operational volume have been identified by determining which markers from the motion capture exceed the operational volume and by how much. A credibility assessment of this analysis was performed in accordance with NASA-STD-7009 prior to delivery to the MPCV program.
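
    The containment check this record describes, each marker's distance to the nearest surface of the operational volume over the duration of a motion, reduces to a signed-distance computation when the volume is idealized as an axis-aligned box. A simplified sketch; the actual analysis uses the imported CAD model of the MPCV volume, not a box, and all numbers here are toy values:

```python
import numpy as np

def containment_report(markers, box_min, box_max):
    """Per-sample signed distance to the nearest face of an axis-aligned
    box (positive = inside); returns a containment flag and the worst
    excursion over the whole motion."""
    d_lower = markers - box_min          # distance to the lower faces
    d_upper = box_max - markers          # distance to the upper faces
    signed = np.minimum(d_lower, d_upper).min(axis=1)  # nearest face per sample
    return bool(np.all(signed >= 0.0)), float(signed.min())

# Toy check: one sample inside the volume, one poking 0.5 units out the top
box_min, box_max = np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])
markers = np.array([[1.0, 1.0, 1.0], [1.0, 1.0, 2.5]])
contained, worst = containment_report(markers, box_min, box_max)
```

Run per marker and per exercise, the worst excursion identifies the regions where the motion exceeds the bounds of the volume, as in the abstract.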

  19. An automated time and hand motion analysis based on planar motion capture extended to a virtual environment

    NASA Astrophysics Data System (ADS)

    Tinoco, Hector A.; Ovalle, Alex M.; Vargas, Carlos A.; Cardona, María J.

    2015-09-01

    In the context of industrial engineering, predetermined time systems (PTS) play an important role in workplaces because inefficiencies are found in assembly processes that require manual manipulation. In this study, an approach is proposed to analyze time and motions in a manual process using a motion capture system embedded in a virtual environment. The motion capture system tracks passive IR markers on the hands to record their positions. For our purpose, a real workplace is represented virtually by domains based on basic geometries to create a virtual workplace. Captured motion data are combined with the virtual workplace to simulate the operations carried out in it, and a time and motion analysis is completed by means of an algorithm. To test the methodology, a case study was intentionally designed that both uses and violates the principles of motion economy. In the results, it was possible to observe where the hands never crossed as well as where both hands passed through the same place. In addition, the activities performed in each zone were observed, and some known deficiencies in the workplace layout were identified by computational analysis. A frequency analysis of hand velocities revealed errors in the chosen assembly method, showing differences between the hand velocities. We see an opportunity to quantify aspects that are not easily identified in a traditional time and motion analysis; the automated analysis is the main contribution of this study. In the industrial context, the proposed methodology has clear applications in monitoring the workplace to analyze repeatability, PTS, and the redistribution of workplaces and labor activities.
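
    The hand-velocity analysis underlying the frequency study above starts from finite differences of the tracked marker positions. A minimal planar sketch; the function name and the constant-speed toy trajectory are illustrative, not from the paper:

```python
import numpy as np

def hand_speeds(positions, dt):
    """Instantaneous hand speed from planar marker positions of shape
    (N, 2) sampled at a fixed interval dt."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

# Toy trajectory: a hand moving at a constant 0.2 m/s along x, 100 Hz sampling
t = np.arange(0.0, 1.0, 0.01)
positions = np.column_stack([0.2 * t, np.zeros_like(t)])
speeds = hand_speeds(positions, 0.01)
```

A histogram or spectrum of such speed series is what exposes asymmetries between the two hands in the case study.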

  20. Common-path biodynamic imaging for dynamic fluctuation spectroscopy of 3D living tissue

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Turek, John; Nolte, David D.

    2017-03-01

    Biodynamic imaging is a novel 3D optical imaging technology based on short-coherence digital holography that measures intracellular motions of cells inside their natural microenvironments. Here both common-path and Mach-Zehnder designs are presented. Biological tissues such as tumor spheroids and ex vivo biopsies are used as targets, and backscattered light is collected as signal. Drugs are applied to samples, and their effects are evaluated by identifying biomarkers that capture intracellular dynamics from the reconstructed holograms. Through digital holography and coherence gating, information from different depths of the samples can be extracted, enabling the deep-tissue measurement of the responses to drugs.

  1. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capture and its application to a system prototype are presented. For 3D image capture, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. High-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
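
    For context, phase-measuring TOF systems like this one map the measured phase shift of the returning modulated light to depth, and the modulation frequency sets the unambiguous range. A sketch of the standard relations at the 20 MHz modulation frequency cited above; the calibration of the actual camera is more involved than this idealized mapping:

```python
# Standard phase-measuring Time-of-Flight relations.
C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # optical-shutter modulation frequency, Hz (from the abstract)
PI = 3.141592653589793

def phase_to_depth(dphi):
    """Depth corresponding to a measured phase shift dphi (radians):
    d = c * dphi / (4 * pi * f_mod)."""
    return C * dphi / (4.0 * PI * F_MOD)

def unambiguous_range():
    """Depth at which the phase wraps around (dphi = 2*pi): c / (2 * f_mod)."""
    return C / (2.0 * F_MOD)
```

At 20 MHz the unambiguous range is roughly 7.5 m, which is why mm-scale accuracy over room-scale scenes is the relevant regime for such a device.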

  2. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    PubMed

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.

  3. Thermophoretic motion behavior of submicron particles in boundary-layer-separation flow around a droplet.

    PubMed

    Wang, Ao; Song, Qiang; Ji, Bingqiang; Yao, Qiang

    2015-12-01

    As a key mechanism of submicron particle capture in wet deposition and wet scrubbing processes, thermophoresis is influenced by the flow and temperature fields. Three-dimensional direct numerical simulations were conducted to quantify the characteristics of the flow and temperature fields around a droplet at three droplet Reynolds numbers (Re) that correspond to three typical boundary-layer-separation flows (steady axisymmetric, steady plane-symmetric, and unsteady plane-symmetric flows). The thermophoretic motion of submicron particles was simulated in these cases. Numerical results show that the motion of submicron particles around the droplet and the deposition distribution exhibit different characteristics under three typical flow forms. The motion patterns of particles are dependent on their initial positions in the upstream and flow forms. The patterns of particle motion and deposition are diversified as Re increases. The particle motion pattern, initial position of captured particles, and capture efficiency change periodically, especially during periodic vortex shedding. The key effects of flow forms on particle motion are the shape and stability of the wake behind the droplet. The drag force of fluid and the thermophoretic force in the wake contribute jointly to the deposition of submicron particles after the boundary-layer separation around a droplet.

  4. Brownian motion of graphene.

    PubMed

    Maragó, Onofrio M; Bonaccorso, Francesco; Saija, Rosalba; Privitera, Giulia; Gucciardi, Pietro G; Iatì, Maria Antonia; Calogero, Giuseppe; Jones, Philip H; Borghese, Ferdinando; Denti, Paolo; Nicolosi, Valeria; Ferrari, Andrea C

    2010-12-28

    Brownian motion is a manifestation of the fluctuation-dissipation theorem of statistical mechanics. It regulates systems in physics, biology, chemistry, and finance. We use graphene as prototype material to unravel the consequences of the fluctuation-dissipation theorem in two dimensions, by studying the Brownian motion of optically trapped graphene flakes. These orient orthogonal to the light polarization, due to the optical constants anisotropy. We explain the flake dynamics in the optical trap and measure force and torque constants from the correlation functions of the tracking signals, as well as comparing experiments with a full electromagnetic theory of optical trapping. The understanding of optical trapping of two-dimensional nanostructures gained through our Brownian motion analysis paves the way to light-controlled manipulation and all-optical sorting of biological membranes and anisotropic macromolecules.

  5. Biofidelic Human Activity Modeling and Simulation with Large Variability

    DTIC Science & Technology

    2014-11-25

    A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to...replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a...that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to

  6. Motion capture based identification of the human body inertial parameters.

    PubMed

    Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko

    2008-01-01

    Identification of body inertias, masses and centers of mass provides important data for simulating, monitoring and understanding the dynamics of motion, and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows in-vivo, painless estimation and monitoring of the inertial parameters. The method is described, and the experimental results obtained are then presented and discussed.
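
    As a much-reduced illustration of the idea of identifying inertial parameters from motion and contact-force data, the static special case recovers total mass and horizontal center of mass from force-plate readings alone. A sketch with synthetic data; the paper's method identifies the full segment-level parameter set from dynamic trials, which this does not attempt:

```python
import numpy as np

def identify_mass_and_com(fz, tau_y, g=9.81):
    """Static identification: vertical force F_z = m*g gives the mass,
    and the sagittal moment tau_y = m*g*x_com gives the CoM offset."""
    m = np.mean(fz) / g
    x_com = np.mean(tau_y) / (m * g)
    return float(m), float(x_com)

# Synthetic static trial: a 70 kg subject with CoM 5 cm from the plate origin
rng = np.random.default_rng(1)
fz = 70.0 * 9.81 + rng.normal(0.0, 2.0, 500)           # force samples, N
tau = 70.0 * 9.81 * 0.05 + rng.normal(0.0, 0.5, 500)   # moment samples, N*m
mass, x_com = identify_mass_and_com(fz, tau)
```

The full method generalizes this: motions excite the dynamics so that a least-squares fit over many samples can separate the individual segment masses, centers of mass and inertias.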

  7. Full-Field Spectroscopy at Megahertz-frame-rates: Application of Coherent Time-Stretch Transform

    NASA Astrophysics Data System (ADS)

    DeVore, Peter Thomas Setsuda

    Outliers or rogue events are found extensively in our world and have incredible effects. Also called rare events, they arise in the distribution of wealth (e.g., Pareto index), finance, network traffic, ocean waves, and e-commerce (selling less of more). Interest in rare optical events exploded after the sighting of optical rogue waves in laboratory experiments at UCLA. Detecting such tail events in fast streams of information necessitates real-time measurements. The Coherent Time-Stretch Transform chirps a pulsed source of radiation so that its temporal envelope matches its spectral profile (analogous to the far field regime of spatial diffraction), and the mapped spectral electric field is slow enough to be captured by a real-time digitizer. Combining this technique with spectral encoding, the time stretch technique has enabled a new class of ultra-high performance spectrometers and cameras (30+ MHz), and analog-to-digital converters that have led to the discovery of optical rogue waves and detection of cancer cells in blood with one in a million sensitivity. Conventionally, the Coherent Time-Stretch Transform maps the spectrum into the temporal electric field, but the time-dilation process along with inherent fiber losses results in reduction of peak power and loss of sensitivity, a problem exacerbated by extremely narrow molecular linewidths. The loss issue notwithstanding, in many cases the requisite dispersive optical device is not available. By extending the Coherent Time-Stretch Transform to the temporal near field, I have demonstrated, for the first time, phase-sensitive absorption spectroscopy of a gaseous sample at millions of frames per second. As the Coherent Time-Stretch Transform may capture both near and far field optical waves, it is a complete spectro-temporal optical characterization tool. This is manifested as an amplitude-dependent chirp, which implies the ability to measure the complex refractive index dispersion at megahertz frame rates. 
This technique is not only four orders of magnitude faster than even the fastest (kHz) spectrometers, but will also enable capture of real-time complex dielectric function dynamics of plasmas and chemical reactions (e.g. combustion). It also has applications in high-energy physics, biology, and monitoring fast high-throughput industrial processes. Adding an electro-optic modulator to the Time-Stretch Transform yields time-to-time mapping of electrical waveforms. Known as TiSER, it is an analog slow-motion processor that uses light to reduce the bandwidth of broadband RF signals for capture by high-sensitivity analog-to-digital converters (ADC). However, the electro-optic modulator limits the electrical bandwidth of TiSER. To solve this, I introduced Optical Sideband-only Amplification, wherein electro-optically generated modulation (containing the RF information) is amplified at the expense of the carrier, addressing the two most important problems plaguing electro-optic modulators: (1) low RF bandwidth and (2) high required RF drive voltages. I demonstrated drive voltage reductions of 5x at 10 GHz and 10x at 50 GHz, while simultaneously increasing the RF bandwidth.

  8. Optical Measurement of In-plane Waves in Mechanical Metamaterials Through Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Schaeffer, Marshall; Trainiti, Giuseppe; Ruzzene, Massimo

    2017-02-01

    We report on a Digital Image Correlation-based technique for the detection of in-plane elastic waves propagating in structural lattices. The experimental characterization of wave motion in lattice structures is currently of great interest due to its relevance to the design of novel mechanical metamaterials with unique/unusual properties such as strongly directional behaviour, negative refractive indices and topologically protected wave motion. Assessment of these functionalities often requires the detection of highly spatially resolved in-plane wavefields, which for reticulated or porous structural assemblies is an open challenge. A Digital Image Correlation approach is implemented that tracks small displacements of the lattice nodes by centring image subsets about the lattice intersections. A high-speed camera records the motion of the points by properly interleaving subsequent frames, thus artificially enhancing the available sampling rate. This, along with an image-stitching procedure, enables the capture of a field of view that is sufficiently large for subsequent processing. The transient response is recorded in the form of full wavefields, which are processed to unveil features of wave motion in a hexagonal lattice. Time snapshots and frequency contours in the spatial Fourier domain are compared with numerical predictions to illustrate the accuracy of the recorded wavefields.
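
    The subset-matching step at the heart of Digital Image Correlation can be sketched as a zero-normalized cross-correlation search. A brute-force, integer-pixel version in Python on a synthetic speckle image; real DIC implementations add subpixel interpolation and subset shape functions, which this omits:

```python
import numpy as np

def dic_displacement(ref, search):
    """Locate a reference subset inside a search image by maximizing the
    zero-normalized cross-correlation (the core matching step of DIC)."""
    r = (ref - ref.mean()) / ref.std()
    h, w = ref.shape
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(search.shape[0] - h + 1):
        for dx in range(search.shape[1] - w + 1):
            win = search[dy:dy + h, dx:dx + w]
            score = np.mean(r * (win - win.mean()) / win.std())
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic speckle image; the 8x8 reference subset is cut at row 5, column 6
rng = np.random.default_rng(3)
image = rng.random((20, 20))
ref = image[5:13, 6:14]
shift = dic_displacement(ref, image)
```

Centring such subsets on the lattice intersections, as the abstract describes, turns each matched shift into a nodal displacement sample of the wavefield.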

  9. Laser tweezer actuated microphotonic array devices for high resolution imaging and analysis in chip-based biosystems

    NASA Astrophysics Data System (ADS)

    Birkbeck, Aaron L.

    A new technology is developed that functionally integrates arrays of lasers and micro-optics into microfluidic systems for the purpose of imaging, analyzing, and manipulating objects and biological cells. In general, the devices and technologies emerging from this area either lack functionality through reliance on mechanical systems or provide a serial, time-consuming approach. Compared to the current state of the art, our all-optical design methodology has several distinguishing features, such as parallelism, high efficiency, low power, auto-alignment, and high-yield fabrication methods, which all contribute to minimizing the cost of the integration process. The potential use of vertical cavity surface emitting lasers (VCSELs) for the creation of two-dimensional arrays of laser optical tweezers that perform independently controlled, parallel capture and transport of large numbers of individual objects and biological cells is investigated. One of the primary biological applications for which VCSEL-array-sourced laser optical tweezers are considered is the formation of engineered tissues through the manipulation and spatial arrangement of different types of cells in a co-culture. Creating devices that combine laser optical tweezers with select micro-optical components permits optical imaging and analysis functions to take place inside the microfluidic channel. One such device is a micro-optical spatial filter whose motion and alignment are controlled using a laser optical tweezer. Unlike conventional spatial filter systems, our device utilizes a refractive optical element that is directly incorporated onto the lithographically patterned spatial filter. This allows the micro-optical spatial filter to automatically align itself in three dimensions to the focal point of the microscope objective, where it then filters out the higher-frequency additive noise components present in the laser beam. 
As a means of performing high resolution imaging in the microfluidic channel, we developed a novel technique that integrates the capacity of a laser tweezer to optically trap and manipulate objects in three dimensions with the resolution-enhanced imaging capabilities of a solid immersion lens (SIL). In our design, the SIL is a free-floating device whose imaging beam, motion control and alignment are provided by a laser optical tweezer, which allows the microfluidic SIL to image in areas that are inaccessible to traditional solid immersion microscopes.

  10. Method for Estimating Three-Dimensional Knee Rotations Using Two Inertial Measurement Units: Validation with a Coordinate Measurement Machine

    PubMed Central

    Vitali, Rachel V.; Cain, Stephen M.; Zaferiou, Antonia M.; Ojeda, Lauro V.; Perkins, Noel C.

    2017-01-01

    Three-dimensional rotations across the human knee serve as important markers of knee health and performance in multiple contexts including human mobility, worker safety and health, athletic performance, and warfighter performance. While knee rotations can be estimated using optical motion capture, that method is largely limited to the laboratory and small capture volumes. These limitations may be overcome by deploying wearable inertial measurement units (IMUs). The objective of this study is to present a new IMU-based method for estimating 3D knee rotations and to benchmark the accuracy of the results using an instrumented mechanical linkage. The method employs data from shank- and thigh-mounted IMUs and a vector constraint for the medial-lateral axis of the knee during periods when the knee joint functions predominantly as a hinge. The method is carefully validated using data from high precision optical encoders in a mechanism that replicates 3D knee rotations spanning (1) pure flexion/extension, (2) pure internal/external rotation, (3) pure abduction/adduction, and (4) combinations of all three rotations. Regardless of the movement type, the IMU-derived estimates of 3D knee rotations replicate the truth data with high confidence (RMS error < 4° and correlation coefficient r≥0.94). PMID:28846613
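
    The core geometric step, extracting a knee angle from the relative orientation of thigh- and shank-fixed frames about an assumed medial-lateral hinge axis, can be sketched as follows. This is a simplification: the paper additionally estimates the hinge axis itself from a vector constraint during hinge-like motion, which is not attempted here:

```python
import numpy as np

def knee_angle(R_thigh, R_shank, hinge_axis):
    """Rotation angle of the shank frame relative to the thigh frame,
    signed by its component along the assumed hinge axis."""
    R_rel = R_thigh.T @ R_shank   # shank orientation expressed in the thigh frame
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    # Rotation axis direction from the skew-symmetric part of R_rel
    axis = np.array([R_rel[2, 1] - R_rel[1, 2],
                     R_rel[0, 2] - R_rel[2, 0],
                     R_rel[1, 0] - R_rel[0, 1]])
    sign = 1.0 if axis @ hinge_axis >= 0 else -1.0
    return sign * angle

def rot_x(a):
    """Rotation about x, taken here as the medial-lateral axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# A pure 30-degree flexion should be recovered exactly
angle = knee_angle(np.eye(3), rot_x(np.deg2rad(30)), np.array([1.0, 0.0, 0.0]))
```

In practice the two rotation matrices come from each IMU's orientation filter, and the validation against optical encoders quoted above (RMS error < 4°) bounds how well this chain performs across movement types.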

  11. Motion-form interactions beyond the motion integration level: evidence for interactions between orientation and optic flow signals.

    PubMed

    Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George

    2013-05-31

    Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted.

  12. Breaking camouflage and detecting targets require optic flow and image structure information.

    PubMed

    Pan, Jing Samantha; Bingham, Ned; Chen, Chang; Bingham, Geoffrey P

    2017-08-01

    Use of motion to break camouflage extends back to the Cambrian [In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution (New York: Basic Books, 2003)]. We investigated the ability to break camouflage and continue to see camouflaged targets after motion stops. This is crucial for the survival of hunting predators. With camouflage, visual targets and distracters cannot be distinguished using only static image structure (i.e., appearance). Motion generates another source of optical information, optic flow, which breaks camouflage and specifies target locations. Optic flow calibrates image structure with respect to spatial relations among targets and distracters, and calibrated image structure makes previously camouflaged targets perceptible in a temporally stable fashion after motion stops. We investigated this proposal using laboratory experiments and compared how many camouflaged targets were identified either with optic flow information alone or with combined optic flow and image structure information. Our results show that the combination of motion-generated optic flow and target-projected image structure information yielded efficient and stable perception of camouflaged targets.

  13. Dynamics of an optically confined nanoparticle diffusing normal to a surface.

    PubMed

    Schein, Perry; O'Dell, Dakota; Erickson, David

    2016-06-01

    Here we measure the hindered diffusion of an optically confined nanoparticle in the direction normal to a surface, and we use this to determine the particle-surface interaction profile in terms of the absolute height. These studies are performed using the evanescent field of an optically excited single-mode silicon nitride waveguide, where the particle is confined in a height-dependent potential energy well generated from the balance of optical gradient and surface forces. Using a high-speed CMOS camera, we demonstrate the ability to capture the short time-scale diffusion-dominated motion for 800-nm-diameter polystyrene particles, with measurement times of only a few seconds per particle. Using established theory, we show how this information can be used to estimate the equilibrium separation of the particle from the surface. As this measurement can be made simultaneously with equilibrium statistical mechanical measurements of the particle-surface interaction energy landscape, we demonstrate the ability to determine these in terms of the absolute rather than relative separation height. This enables the comparison of potential energy landscapes of particle-surface interactions measured under different experimental conditions, enhancing the utility of this technique.
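
    For reference, the basic estimator connecting a particle track to a diffusion coefficient is the short-lag mean-squared displacement. A sketch for free 1-D diffusion with synthetic data; near a wall the measured diffusion becomes hindered and height-dependent, which is precisely the effect the paper exploits:

```python
import numpy as np

def diffusion_coefficient(track, dt):
    """Short-lag MSD estimator for free 1-D diffusion: MSD(dt) = 2*D*dt."""
    msd = np.mean(np.diff(track) ** 2)
    return float(msd / (2.0 * dt))

# Synthetic free Brownian track with a known diffusion coefficient
rng = np.random.default_rng(2)
D_true, dt = 0.5, 1e-3                      # um^2/s and s
steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), 100_000)
track = np.cumsum(steps)
D_est = diffusion_coefficient(track, dt)
```

Applied to the normal-to-surface coordinate, the deviation of such an estimate from the free-diffusion value encodes the hydrodynamic hindrance and hence the absolute particle-surface separation.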

  14. In-motion optical sensing for assessment of animal well-being

    NASA Astrophysics Data System (ADS)

    Atkins, Colton A.; Pond, Kevin R.; Madsen, Christi K.

    2017-05-01

    The application of in-motion optical sensor measurements was investigated for inspecting livestock soundness as a means of assessing animal well-being. An optical sensor-based platform was used to collect in-motion, weight-related information. Eight steers, weighing between 680 and 1134 kg, were evaluated twice. Six of the eight steers were used for further evaluation and analysis. Hoof impacts caused plate flexion that was optically sensed. Kinetic differences between animals' strides at a walking or running/trotting gait, together with distinct force distributions of hoof impacts, allowed real-time biometric patterns to be observed. Overall, optical sensor-based measurements identified hoof differences between and within animals in motion that may allow for diagnosis of musculoskeletal unsoundness without visual evaluation.

  15. A Virtual Reality Dance Training System Using Motion Capture Technology

    ERIC Educational Resources Information Center

    Chan, J. C. P.; Leung, H.; Tang, J. K. T.; Komura, T.

    2011-01-01

    In this paper, a new dance training system based on the motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements-imitating the teacher's movements and listening to the teacher's feedback. A prototype of our proposed system is implemented, in which a student can imitate…

  16. A common framework for the analysis of complex motion? Standstill and capture illusions

    PubMed Central

    Dürsteler, Max R.

    2014-01-01

    A series of illusions was created by presenting stimuli, which consisted of two overlapping surfaces each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth-texture. This illusory motion standstill arises due to a failure to represent two independent surfaces (one for luminance and one for depth textures) and motion transparency (the ability to perceive motion of both surfaces simultaneously). Instead the stimulus is represented as a single non-transparent surface taking on the stationary nature of the luminance-defined texture. By contrast, if it is the 2-D luminance-defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth-textures that are rotating, expanding/contracting, or spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth texture will be perceived. This observation is strongly in favor of a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically-defined textures with smooth transitions between their colors. This suggests that, with respect to color motion perception, the complex-motion pathway is only able to accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual-opponent color pathway. PMID:25566023

  17. Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation

    NASA Astrophysics Data System (ADS)

    Nakata, Robert

    Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements, even when obscured by weather, clouds and smoke, and can penetrate walls and collapsed structures constructed with non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that does not require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average 5 dB improvement in Signal-to-Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.

  18. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are applicable to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with X-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351
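
    The registration step can be pictured with a deliberately simplified sketch. The paper's automated feature-based algorithm also recovers rotation, but the simplest rigid case, two clouds differing only by a translation, already shows the idea of aligning point clouds into one reconstruction (the data and function names below are illustrative, not the authors' implementation):

```python
def centroid(points):
    """Mean (x, y, z) of a 3-D point cloud."""
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def register_translation(source, target):
    """Simplest rigid registration: align two 3-D point clouds that
    differ only by a translation, by matching their centroids."""
    cs, ct = centroid(source), centroid(target)
    shift = tuple(ct[k] - cs[k] for k in range(3))
    return [tuple(p[k] + shift[k] for k in range(3)) for p in source]

scan_a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # one view of the thread
scan_b = [(5, 2, 1), (6, 2, 1), (5, 3, 1)]   # same points, shifted
print(register_translation(scan_a, scan_b))
```

    A full feature-based registration additionally matches corresponding features between views and solves for rotation as well, but the centroid alignment above is the translational core of that computation.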

  19. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computing capabilities, along with decreasing costs, has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid operation, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are applicable to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated after a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with X-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  20. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    NASA Astrophysics Data System (ADS)

    Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei

    2012-08-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (e.g., optical) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, the approach's ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of the weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurements of 3D tumor motion and 1D abdominal surface motion (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and an average duration of ∼115 s per trace. Given only 5 s (roughly one breath) of pretreatment training data, the method achieved an average 3D error of 1.5 mm and a 95th percentile error of 3.4 mm on unseen test data. The average 3D error was further reduced to 1.4 mm when the model was tuned to its optimal setting for each respiratory trace. In one trace where a few outliers were present in the training data, the proposed method achieved an error reduction of as much as ∼50% compared with the best linear model (1.0 mm versus 2.1 mm). The memory-based learning technique is able to accurately capture the highly complex and nonlinear relations between tumor and surrogate motion in an efficient manner (a few milliseconds per estimate). Furthermore, the algorithm is particularly suitable for handling situations where the training data are contaminated by large errors or outliers. These desirable properties make it an ideal candidate for accurate and robust tumor gating/tracking using respiratory surrogates.
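
    The memory-based estimator described in this abstract can be sketched as a locally weighted linear regression: stored (surrogate, tumor) pairs near the query dominate the fit, so outliers far from the query contribute negligibly. The Gaussian kernel, bandwidth, and toy sinusoidal data below are illustrative assumptions rather than the authors' exact model:

```python
import math

def lwr_predict(query, memory, bandwidth=0.5):
    """Memory-based (locally weighted) regression: fit a weighted linear
    model around the query using stored (surrogate, tumor) pairs.
    Nearby samples get high weight; distant samples get near-zero weight."""
    w = [math.exp(-((x - query) / bandwidth) ** 2) for x, _ in memory]
    sw = sum(w)
    mx = sum(wi * x for wi, (x, _) in zip(w, memory)) / sw
    my = sum(wi * y for wi, (_, y) in zip(w, memory)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, (x, y) in zip(w, memory))
    var = sum(wi * (x - mx) ** 2 for wi, (x, _) in zip(w, memory))
    slope = cov / var if var > 0 else 0.0
    return my + slope * (query - mx)

# Toy memory: tumor position varies nonlinearly with the abdominal surrogate.
memory = [(x / 10, math.sin(x / 10)) for x in range(0, 150)]
print(lwr_predict(7.5, memory))   # within ~0.1 of sin(7.5)
```

    Because no global model is ever fit, "training" is just storing pairs, which is why the abstract can adapt to new data almost instantaneously.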

  1. A natural user interface to integrate citizen science and physical exercise.

    PubMed

    Palermo, Eduardo; Laut, Jeffrey; Nov, Oded; Cappa, Paolo; Porfiri, Maurizio

    2017-01-01

    Citizen science enables volunteers to contribute to scientific projects, where massive data collection and analysis are often required. Volunteers participate in citizen science activities online from their homes or in the field and are motivated by both intrinsic and extrinsic factors. Here, we investigated the possibility of integrating citizen science tasks within physical exercises envisaged as part of a potential rehabilitation therapy session. The citizen science activity entailed environmental mapping of a polluted body of water using a miniature instrumented boat, which was remotely controlled by the participants through their physical gestures, tracked by a low-cost markerless motion capture system. Our findings demonstrate that the natural user interface offers an engaging and effective means for performing environmental monitoring tasks. At the same time, the citizen science activity increases the commitment of the participants, leading to better motion performance, quantified through an array of objective indices. The study constitutes a first and necessary step toward rehabilitative treatments of the upper limb through citizen science and low-cost markerless optical systems.

  2. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p < 0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324

  3. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color-separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation where no image sensors exist for such images.

  4. Motion-form interactions beyond the motion integration level: Evidence for interactions between orientation and optic flow signals

    PubMed Central

    Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George

    2013-01-01

    Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted. PMID:23729767

  5. Capture of intraocular lens optic by residual capsular opening in secondary implantation: long-term follow-up.

    PubMed

    Tian, Tian; Chen, Chunli; Jin, Haiying; Jiao, Lyu; Zhang, Qi; Zhao, Peiquan

    2018-04-02

    To introduce a novel surgical technique for optic capture by residual capsular opening in secondary intraocular lens (IOL) implantation and to report the outcomes of long-term follow-up. Twenty patients (20 eyes) who had received secondary IOL implantation with the optic capture technique were retrospectively reviewed. We used the residual capsular opening for capturing the optic and inserted the haptics in the sulcus during surgery. Baseline clinical characteristics and surgical outcomes, including best-corrected visual acuity (BCVA), refractive status, and IOL position, were recorded. The postoperative location and stability of the IOL were evaluated using ultrasound biomicroscopy. The optic capture technique was successfully performed in all cases, including 5 cases with a large area of posterior capsular opacity, 6 cases with posterior capsular tear or rupture, and 9 cases with adhesive capsules. BCVA improved from 0.60 logMAR at baseline to 0.36 logMAR at the last follow-up (P < 0.001). Spherical equivalent changed from 10.67 ± 4.59 D at baseline to 0.12 ± 1.35 D at 6 months postoperatively (P < 0.001). Centered IOLs were observed in all cases and remained captured through the residual capsular opening in 19 (95%) eyes at the last follow-up. In one case, the captured optic of the IOL slid into the ciliary sulcus at 7 months postoperatively. No other postoperative complications were observed in any case. This optic capture technique using the residual capsular opening is efficacious and safe and can achieve IOL stability over long-term follow-up.

  6. Pre-clinical and clinical walking kinematics in female breeding pigs with lameness: A nested case-control cohort study.

    PubMed

    Stavrakakis, S; Guy, J H; Syranidis, I; Johnson, G R; Edwards, S A

    2015-07-01

    Gait profiles were investigated in a cohort of female pigs experiencing a lameness period prevalence of 29% over 17 months. Gait alterations before and during visually diagnosed lameness were evaluated to identify the best quantitative clinical lameness indicators and early predictors of lameness. Pre-breeding gilts (n = 84) were recruited to the study over a period of 6 months, underwent motion capture every 5 weeks and, depending on their age at entry to the study, were followed for up to three successive gestations. Animals were subject to motion capture in each parity at 8 weeks of gestation and on the day of weaning (28 days postpartum). During kinematic motion capture, the pigs walked on the same concrete walkway and an array of infra-red cameras was used to collect three-dimensional coordinate data from reflective skin markers attached to the head, trunk and limb anatomical landmarks. Of 24 pigs diagnosed with lameness, 19 had preclinical gait records, whilst 18 underwent motion capture while lame. Depending on availability, data from one or two preclinical motion-capture sessions 1-11 months prior to lameness and from the day of lameness were analysed. Lameness was best detected and evaluated using relative spatiotemporal gait parameters, especially vertical head displacement and asymmetric stride phase timing. Irregularity in the step-to-stride length ratio was elevated (deviation ≥ 0.03) in young pigs which presented lameness in later life (odds ratio 7.2-10.8). Copyright © 2015 Elsevier Ltd. All rights reserved.
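
    The step-to-stride length ratio irregularity used above as an early predictor can be computed directly from per-step spatial data. This sketch assumes the deviation is measured against the ideal symmetric ratio of 0.5 (the study's exact definition may differ):

```python
def stride_irregularity(step_lengths):
    """Maximum deviation of the step-to-stride length ratio from the
    ideal 0.5. A stride is two consecutive steps (left + right); a
    perfectly symmetric gait gives a ratio of 0.5 for every step."""
    deviations = []
    for i in range(len(step_lengths) - 1):
        stride = step_lengths[i] + step_lengths[i + 1]
        deviations.append(abs(step_lengths[i] / stride - 0.5))
    return max(deviations)

print(stride_irregularity([0.62, 0.62, 0.62, 0.62]))           # symmetric: 0.0
print(stride_irregularity([0.66, 0.58, 0.66, 0.58]) >= 0.03)   # flags: True
```

    Under the abstract's threshold, a deviation of 0.03 or more in a young animal would mark it as at elevated risk of later lameness.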

  7. Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.

    PubMed

    Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno

    2016-11-01

    Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.

  8. Robust Foot Clearance Estimation Based on the Integration of Foot-Mounted IMU Acceleration Data

    PubMed Central

    Benoussaad, Mourad; Sijobert, Benoît; Mombaur, Katja; Azevedo Coste, Christine

    2015-01-01

    This paper introduces a method for the robust estimation of foot clearance during walking, using a single inertial measurement unit (IMU) placed on the subject’s foot. The proposed solution is based on double integration and drift cancellation of foot acceleration signals. The method is insensitive to misalignment of IMU axes with respect to foot axes. Details are provided regarding calibration and signal processing procedures. Experimental validation was performed on 10 healthy subjects under three walking conditions: normal, fast and with obstacles. Foot clearance estimation results were compared to measurements from an optical motion capture system. The mean error between them is significantly less than 15% under the various walking conditions. PMID:26703622
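
    The double integration with drift cancellation can be sketched as follows. The zero-velocity assumption at foot-flat instants and the synthetic acceleration profile are illustrative assumptions, not the paper's exact calibration procedure:

```python
import math

def integrate_with_drift_cancellation(accel, dt):
    """Doubly integrate vertical foot acceleration over one stride.
    Drift from sensor bias is cancelled by assuming zero velocity at
    both foot-flat instants (start and end of the stride)."""
    # First integration: velocity (trapezoidal rule)
    vel = [0.0]
    for i in range(1, len(accel)):
        vel.append(vel[-1] + 0.5 * (accel[i - 1] + accel[i]) * dt)
    # Remove a linear drift so velocity is zero at both ends
    n = len(vel) - 1
    vel = [v - vel[-1] * i / n for i, v in enumerate(vel)]
    # Second integration: vertical displacement (clearance profile)
    pos = [0.0]
    for i in range(1, len(vel)):
        pos.append(pos[-1] + 0.5 * (vel[i - 1] + vel[i]) * dt)
    return pos

# Synthetic 1-s stride: cosine acceleration plus a constant sensor bias
dt, n = 0.01, 100
accel = [math.cos(2 * math.pi * i / n) + 0.05 for i in range(n + 1)]
clearance = integrate_with_drift_cancellation(accel, dt)
print(f"peak clearance: {max(clearance):.3f} m")  # ~0.05 m for this profile
```

    Without the linear de-drift step, the constant bias would grow quadratically through the double integration; cancelling it at the foot-flat instants is what keeps the per-stride clearance estimate bounded.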

  9. A Data Set of Human Body Movements for Physical Rehabilitation Exercises.

    PubMed

    Vakanski, Aleksandar; Jun, Hyung-Pil; Paul, David; Baker, Russell

    2018-03-01

    The article presents the University of Idaho - Physical Rehabilitation Movement Data (UI-PRMD) - a publicly available data set of movements related to common exercises performed by patients in physical rehabilitation programs. For the data collection, 10 healthy subjects performed 10 repetitions of different physical therapy movements, with a Vicon optical tracker and a Microsoft Kinect sensor used for motion capture. The data are in a format that includes positions and angles of full-body joints. The objective of the data set is to provide a basis for mathematical modeling of therapy movements, as well as for establishing performance metrics for evaluation of patient consistency in executing the prescribed rehabilitation exercises.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuramochi, Hikaru; Takeuchi, Satoshi; Tahara, Tahei, E-mail: tahei@riken.jp

    We describe details of the setup for time-resolved impulsive stimulated Raman spectroscopy (TR-ISRS). In this method, snapshot molecular vibrational spectra of the photoreaction transients are captured via time-domain Raman probing using ultrashort pulses. Our instrument features transform-limited sub-7-fs pulses to impulsively excite and probe coherent nuclear wavepacket motions, allowing us to observe vibrational fingerprints of transient species from the terahertz to 3000-cm⁻¹ region with high sensitivity. Key optical components for the best spectroscopic performance are discussed. The TR-ISRS measurements for the excited states of diphenylacetylene in cyclohexane are demonstrated, highlighting the capability of our setup to track femtosecond dynamics of all the Raman-active fundamental molecular vibrations.

  11. Improved optical flow velocity analysis in SO2 camera images of volcanic plumes - implications for emission-rate retrievals investigated at Mt Etna, Italy and Guallatiri, Chile

    NASA Astrophysics Data System (ADS)

    Gliß, Jonas; Stebel, Kerstin; Kylling, Arve; Sudbø, Aasmund

    2018-02-01

    Accurate gas velocity measurements in emission plumes are highly desirable for various atmospheric remote sensing applications. The imaging technique of UV SO2 cameras is commonly used to monitor SO2 emissions from volcanoes and anthropogenic sources (e.g. power plants, ships). The camera systems capture the emission plumes at high spatial and temporal resolution. This allows the gas velocities in the plume to be retrieved directly from the images. The latter can be measured at a pixel level using optical flow (OF) algorithms. This is particularly advantageous under turbulent plume conditions. However, OF algorithms intrinsically rely on contrast in the images and often fail to detect motion in low-contrast image areas. We present a new method to identify ill-constrained OF motion vectors and replace them using the local average velocity vector. The latter is derived based on histograms of the retrieved OF motion fields. The new method is applied to two example data sets recorded at Mt Etna (Italy) and Guallatiri (Chile). We show that in many cases, the uncorrected OF yields significantly underestimated SO2 emission rates. We further show that our proposed correction can account for this and that it significantly improves the reliability of optical-flow-based gas velocity retrievals. In the case of Mt Etna, the SO2 emissions of the north-eastern crater are investigated. The corrected SO2 emission rates range between 4.8 and 10.7 kg s⁻¹ (average of 7.1 ± 1.3 kg s⁻¹) and are in good agreement with previously reported values. For the Guallatiri data, the emissions of the central crater and a fumarolic field are investigated. The retrieved SO2 emission rates are between 0.5 and 2.9 kg s⁻¹ (average of 1.3 ± 0.5 kg s⁻¹) and provide the first report of SO2 emissions from this remotely located and inaccessible volcano.
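
    The replacement of ill-constrained optical-flow vectors can be sketched in a simplified form. Here low-magnitude vectors stand in for the paper's ill-constrained detections, and the mode of a histogram over the reliable vectors stands in for the local average velocity; both simplifications are assumptions for illustration:

```python
from collections import Counter

def correct_flow(vectors, min_magnitude=0.5):
    """Replace ill-constrained optical-flow vectors (here: vectors with
    near-zero magnitude, typical of low-contrast image areas) with the
    dominant velocity found in a histogram of the reliable vectors."""
    def magnitude(v):
        return (v[0] ** 2 + v[1] ** 2) ** 0.5
    reliable = [v for v in vectors if magnitude(v) >= min_magnitude]
    # Histogram of reliable vectors, binned to integer components
    bins = Counter((round(vx), round(vy)) for vx, vy in reliable)
    dominant = bins.most_common(1)[0][0]
    return [v if magnitude(v) >= min_magnitude else dominant for v in vectors]

# A plume drifting right at ~3 px/frame; low-contrast pixels report ~0
field = [(3.1, 0.2), (2.9, -0.1), (0.0, 0.0), (3.0, 0.1), (0.1, 0.0)]
print(correct_flow(field))
```

    Leaving the near-zero vectors in place would drag the integrated plume velocity, and hence the emission rate, toward zero, which is the underestimation the correction addresses.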

  12. Validation of the Leap Motion Controller using markered motion capture technology.

    PubMed

    Smeragliuolo, Anna H; Hill, N Jeremy; Disla, Luis; Putrino, David

    2016-06-14

    The Leap Motion Controller (LMC) is a low-cost, markerless motion capture device that tracks hand, wrist and forearm position. Integration of this technology into healthcare applications has begun to occur rapidly, making validation of the LMC's data output an important research goal. Here, we perform a detailed evaluation of the kinematic data output from the LMC, and validate this output against gold-standard, markered motion capture technology. We instructed subjects to perform three clinically-relevant wrist (flexion/extension, radial/ulnar deviation) and forearm (pronation/supination) movements. The movements were simultaneously tracked using both the LMC and a marker-based motion capture system from Motion Analysis Corporation (MAC). Adjusting for known inconsistencies in the LMC sampling frequency, we compared simultaneously acquired LMC and MAC data by computing Pearson's correlation (r) and root mean square error (RMSE). Wrist flexion/extension and radial/ulnar deviation showed good overall agreement (r=0.95; RMSE=11.6°, and r=0.92; RMSE=12.4°, respectively) with the MAC system. However, when tracking forearm pronation/supination, there were serious inconsistencies in reported joint angles (r=0.79; RMSE=38.4°). Hand posture significantly influenced the quality of wrist deviation (P<0.005) and forearm supination/pronation (P<0.001) measurements, but not wrist flexion/extension (P=0.29). We conclude that the LMC is capable of providing data that are clinically meaningful for wrist flexion/extension, and perhaps wrist deviation. It cannot yet return clinically meaningful data for measuring forearm pronation/supination. Future studies should continue to validate the LMC as updated versions of its software are developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
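
    The agreement statistics used in this validation, Pearson's r and RMSE between simultaneously acquired angle traces, can be computed directly; the toy wrist-angle traces below are illustrative, not the study's data:

```python
import math
from statistics import mean

def pearson_r(a, b):
    """Pearson correlation between two equal-length traces."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def rmse(a, b):
    """Root mean square error between two equal-length traces."""
    return math.sqrt(mean([(x - y) ** 2 for x, y in zip(a, b)]))

# Toy wrist-angle traces (degrees): device under test vs. gold standard
lmc = [0, 10, 21, 29, 41, 50, 61, 69, 80]
mac = [0, 10, 20, 30, 40, 50, 60, 70, 80]
print(round(pearson_r(lmc, mac), 4), round(rmse(lmc, mac), 4))
```

    Reporting both statistics matters: r captures whether the two devices move together, while RMSE captures the absolute angular disagreement that determines clinical usability.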

  13. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classification of model-free motion data. A nearest-neighbor classifier is proposed, based on comparison by Dynamic Time Warping with a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification problem is taken into consideration. A motion capture database containing data from 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
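
    The comparison at the heart of this classifier can be sketched as standard Dynamic Time Warping where each "symbol" is a whole frame of joint positions; the per-frame cloud distance below (summed joint-wise Euclidean distance) is an illustrative choice, not necessarily the paper's exact measure:

```python
def dtw_distance(seq_a, seq_b, dist):
    """Dynamic Time Warping between two sequences of frames, where each
    frame is a cloud of 3-D joint positions and `dist` compares clouds."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def cloud_distance(cloud_a, cloud_b):
    """Sum of Euclidean distances between corresponding joints."""
    return sum(
        sum((p - q) ** 2 for p, q in zip(pa, pb)) ** 0.5
        for pa, pb in zip(cloud_a, cloud_b)
    )

# Two gait fragments; each frame holds two (x, y, z) joint positions.
# walk_b repeats its first pose, so DTW aligns it to walk_a at zero cost.
walk_a = [[(0, 0, 0), (1, 0, 0)], [(0.1, 0, 0), (1.1, 0, 0)]]
walk_b = [[(0, 0, 0), (1, 0, 0)], [(0, 0, 0), (1, 0, 0)],
          [(0.1, 0, 0), (1.1, 0, 0)]]
print(dtw_distance(walk_a, walk_b, cloud_distance))  # 0.0
```

    A gait sample is then assigned the identity of its nearest neighbor under this warped distance, which tolerates the differing walking speeds that defeat frame-by-frame comparison.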

  14. Health Problems Discovery from Motion-Capture Data of Elderly

    NASA Astrophysics Data System (ADS)

    Pogorelc, B.; Gams, M.

    Rapid aging of the populations of developed countries could exceed society's capacity to care for the elderly. To help solve this problem, we propose a system for automatic discovery of health problems from motion-capture data of the gait of elderly users. The gait of the user is captured with a motion capture system, which consists of tags attached to the body and sensors situated in the apartment. Positions of the tags are acquired by the sensors, and the resulting time series of position coordinates are analyzed with machine learning algorithms in order to identify the specific health problem. We propose novel features for training a machine learning classifier that classifies the user's gait into: i) normal, ii) with hemiplegia, iii) with Parkinson's disease, iv) with pain in the back and v) with pain in the leg. Results show that naive Bayes needs more tags and less noise to reach a classification accuracy of 98% than support vector machines need to reach 99%.

  15. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214

  16. Closed-loop optical stabilization and digital image registration in adaptive optics scanning light ophthalmoscopy

    PubMed Central

    Yang, Qiang; Zhang, Jie; Nozato, Koji; Saito, Kenichi; Williams, David R.; Roorda, Austin; Rossi, Ethan A.

    2014-01-01

    Eye motion is a major impediment to the efficient acquisition of high resolution retinal images with the adaptive optics (AO) scanning light ophthalmoscope (AOSLO). Here we demonstrate a solution to this problem by implementing both optical stabilization and digital image registration in an AOSLO. We replaced the slow scanning mirror with a two-axis tip/tilt mirror for the dual functions of slow scanning and optical stabilization. Closed-loop optical stabilization reduced the amplitude of eye-movement-related image motion by a factor of 10–15. The residual RMS error after optical stabilization alone was on the order of the size of foveal cones: ~1.66–2.56 μm or ~0.34–0.53 arcmin with typical fixational eye motion for normal observers. The full implementation, with real-time digital image registration, corrected the residual eye motion after optical stabilization with an accuracy of ~0.20–0.25 μm or ~0.04–0.05 arcmin RMS, which to our knowledge is more accurate than any method previously reported. PMID:25401030

  17. Development and biological applications of optical tweezers and Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Xie, Chang'an

    Optical tweezers are a three-dimensional manipulation tool that employs the gradient force originating from a single highly focused laser beam. Raman spectroscopy is a molecular analytical tool that can give a unique "fingerprint" for each substance by measuring the characteristic vibrations of its molecules. The combination of these two optical techniques offers a new tool for the manipulation and identification of single biological cells and microscopic particles. In this thesis, we designed and implemented a Laser-Tweezers-Raman-Spectroscopy (LTRS) system, also called Raman tweezers, for the simultaneous capture and analysis of both biological and non-biological particles. We show that microparticles can be conveniently captured at the focus of a laser beam and that Raman spectra of the trapped particles can be acquired with high quality. The LTRS system overcomes the intrinsic Brownian motion and motility of microparticles in solution and provides a promising tool for identifying suspicious agents in situ. To increase the signal-to-noise ratio, several schemes were employed in the LTRS system to reduce the blank noise and the fluorescence signal coming from the analytes and the surrounding background. These techniques include near-infrared excitation, optical levitation, confocal microscopy, and frequency-shifted Raman difference. The LTRS system has been applied to cell biology studies at the single-cell level. With the Raman-tweezers system, we studied dynamic physiological processes of single living cells, including the cell cycle, the transcription and translation of recombinant protein in transgenic yeast cells, and T cell activation. We also studied cell damage and the associated biochemical processes induced by optical traps and UV radiation, and evaluated laser heating by near-infrared Raman spectroscopy. These studies show that the Raman-tweezers system can provide rapid and reliable diagnosis of cellular disorders and can be used as a valuable tool to study cellular processes within single living cells or intracellular organelles, and may aid research in molecular and cellular biology.

  18. 4D computed tomography scans for conformal thoracic treatment planning: is a single scan sufficient to capture thoracic tumor motion?

    NASA Astrophysics Data System (ADS)

    Tseng, Yolanda D.; Wootton, Landon; Nyflot, Matthew; Apisarnthanarax, Smith; Rengan, Ramesh; Bloch, Charles; Sandison, George; St. James, Sara

    2018-01-01

    Four dimensional computed tomography (4DCT) scans are routinely used in radiation therapy to determine the internal treatment volume for targets that are moving (e.g. lung tumors). The use of these studies has allowed clinicians to create target volumes based upon the motion of the tumor during the imaging study. The purpose of this work is to determine if a target volume based on a single 4DCT scan at simulation is sufficient to capture thoracic motion. Phantom studies were performed to determine expected differences between volumes contoured on 4DCT scans and those on the evaluation CT scans (slow scans). Evaluation CT scans acquired during treatment of 11 patients were compared to the 4DCT scans used for treatment planning. The images were assessed to determine if the target remained within the target volume determined during the first 4DCT scan. A total of 55 slow scans were compared to the 11 planning 4DCT scans. Small differences were observed in phantom between the 4DCT volumes and the slow scan volumes, with a maximum of 2.9%, that can be attributed to minor differences in contouring and the ability of the 4DCT scan to adequately capture motion at the apex and base of the motion trajectory. Larger differences were observed in the patients studied, up to a maximum volume difference of 33.4%. These results demonstrate that a single 4DCT scan is not adequate to capture all thoracic motion throughout treatment.

  19. Apparent diffusive motion of centrin foci in living cells: implications for diffusion-based motion in centriole duplication

    NASA Astrophysics Data System (ADS)

    Rafelski, Susanne M.; Keller, Lani C.; Alberts, Jonathan B.; Marshall, Wallace F.

    2011-04-01

    The degree to which diffusion contributes to positioning cellular structures is an open question. Here we investigate the question of whether diffusive motion of centrin granules would allow them to interact with the mother centriole. The role of centrin granules in centriole duplication remains unclear, but some proposed functions of these granules, for example, in providing pre-assembled centriole subunits, or by acting as unstable 'pre-centrioles' that need to be captured by the mother centriole (La Terra et al 2005 J. Cell Biol. 168 713-22), require the centrin foci to reach the mother. To test whether diffusive motion could permit such interactions in the necessary time scale, we measured the motion of centrin-containing foci in living human U2OS cells. We found that these centrin foci display apparently diffusive undirected motion. Using the apparent diffusion constant obtained from these measurements, we calculated the time scale required for diffusive capture by the mother centriole and found that it would greatly exceed the time available in the cell cycle. We conclude that mechanisms invoking centrin foci capture by the mother, whether as a pre-centriole or as a source of components to support later assembly, would require a form of directed motility of centrin foci that has not yet been observed.

  20. Definition of anatomical zero positions for assessing shoulder pose with 3D motion capture during bilateral abduction of the arms.

    PubMed

    Rettig, Oliver; Krautwurst, Britta; Maier, Michael W; Wolf, Sebastian I

    2015-12-09

    Surgical interventions at the shoulder may alter function of the shoulder complex. Clinically, the outcome can be assessed by universal goniometry. Marker-based motion capture may not resemble these results due to differing angle definitions. The clinical inspection of bilateral arm abduction for assessing shoulder dysfunction is performed with a marker based 3D optical measurement method. An anatomical zero position of shoulder pose is proposed to determine absolute angles according to the Neutral-0-Method as used in orthopedic context. Static shoulder positions are documented simultaneously by 3D marker tracking and universal goniometry in 8 young and healthy volunteers. Repetitive bilateral arm abduction movements of at least 150° range of motion are monitored. Similarly a subject with gleno-humeral osteoarthritis is monitored for demonstrating the feasibility of the method and to illustrate possible shoulder dysfunction effects. With mean differences of less than 2°, the proposed anatomical zero position results in good agreement between shoulder elevation/depression angles determined by 3D marker tracking and by universal goniometry in static positions. Lesser agreement is found for shoulder pro-/retraction with systematic deviations of up to 6°. In the bilateral arm abduction movements the volunteers perform a common and specific pattern in clavicula-thoracic and gleno-humeral motion with maximum shoulder angles of 32° elevation, 5° depression and 45° protraction, respectively, whereas retraction is hardly reached. Further, they all show relevant out of (frontal) plane motion with anteversion angles of 30° in overhead position (maximum abduction). With increasing arm anteversion the shoulder is increasingly retroverted, with a maximum of 20° retroversion. The subject with gleno-humeral osteoarthritis shows overall less shoulder abduction range of motion but with increased out-of-plane movement during abduction. 
The proposed anatomical zero definition for shoulder pose fills the missing link for determining absolute joint angles for shoulder elevation/depression and pro-/retraction. For elevation/depression the accuracy suits clinical expectations very well, with mean differences of less than 2° and limits of agreement of 8.6°, whereas for pro-/retraction the accuracy in individual cases may be inferior, with limits of agreement of up to 24.6°. This must be kept critically in mind when applying this concept to shoulder intervention studies.

  1. Minimum-variance Brownian motion control of an optically trapped probe.

    PubMed

    Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang

    2009-10-20

    This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 microm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain the optimal performance in the case in which the system is time varying when operating the actively controlled optical trap in a complex environment.
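
    The discrete first-order model and proportional feedback described in this record can be illustrated with a toy simulation. This is a sketch under stated assumptions, not the paper's identified model: the coefficient `a`, the noise level, and the gain `kp` are illustrative values.

```python
import random

def simulate(steps, a=0.95, noise_std=1.0, kp=0.0, seed=1):
    """Discrete first-order model of a trapped probe's position:
    x[n+1] = (a - kp) * x[n] + w[n], where a lumps trap stiffness,
    drag, and sampling, w[n] is random thermal forcing, and -kp*x[n]
    is the proportional feedback actuation."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        x = (a - kp) * x + rng.gauss(0.0, noise_std)
        xs.append(x)
    return xs

def variance(xs):
    """Sample variance of the probe's Brownian position trace."""
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)
```

    Closing the loop shrinks the steady-state variance of this AR(1)-like process from roughly noise_std²/(1 − a²) to noise_std²/(1 − (a − kp)²), which is the quantity a minimum-variance controller tunes.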

  2. Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology

    PubMed Central

    Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila

    2017-01-01

    Background Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design Quasi-experimental, wait-list comparison study. Intervention The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s) The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students’ satisfaction as measured through a questionnaire. Results There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329

  3. 3D Human Motion Editing and Synthesis: A Survey

    PubMed Central

    Wang, Xin; Chen, Qiudi; Wang, Wanliang

    2014-01-01

    The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395

  4. Fractional-order information in the visual control of lateral locomotor interception.

    PubMed

    Bootsma, Reinoud J; Ledouit, Simon; Casanova, Remy; Zaal, Frank T J M

    2016-04-01

    Previous work on locomotor interception of a target moving in the transverse plane has suggested that interception is achieved by maintaining the target's bearing angle (often inadvertently confused and/or confounded with the target heading angle) at a constant value. However, dynamics-based model simulations testing the veracity of the underlying control strategy of nulling the rate of change in the bearing angle have been restricted to limited conditions of target motion, and only a few alternatives have been considered. Exploring a wide range of target motion characteristics with straight and curving ball trajectories in a virtual reality setting, we examined how soccer goalkeepers moved along the goal line to intercept long-range shots on goal, a situation in which interception is naturally constrained to movement along a single dimension. Analyses of the movement patterns suggested reliance on combinations of optical position and velocity for straight trajectories and optical velocity and acceleration for curving trajectories. As an alternative to combining such standard integer-order derivatives, we demonstrate with a simple dynamical model that nulling a single informational variable of a self-tuned fractional (rather than integer) order efficiently captures the timing and patterning of the observed interception behaviors. This new perspective could fundamentally change the conception of what perceptual systems may actually provide, both in humans and in other animals. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
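
    The fractional-order information variable invoked above can be made concrete with the Grünwald-Letnikov discrete approximation of a fractional derivative. This is a generic numerical sketch, not the authors' model: at order α = 0 it reduces to optical position, at α = 1 to a backward-difference optical velocity, with intermediate orders blending the two.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights (-1)^k * C(alpha, k), computed by the
    recursion w[0] = 1, w[k] = w[k-1] * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_derivative(x, alpha, h):
    """Approximate order-alpha derivative of the sampled signal x
    (sampling step h), evaluated at the last sample."""
    w = gl_weights(alpha, len(x))
    n = len(x) - 1
    return sum(w[k] * x[n - k] for k in range(n + 1)) / h ** alpha
```

    A control law that nulls such a variable with a self-tuned α can thus interpolate continuously between position-, velocity-, and acceleration-based strategies.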

  5. GN/C translation and rotation control parameters for AR/C (category 2)

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1991-01-01

    Detailed analysis of the Automatic Rendezvous and Capture problem indicates a need for three different regions of mathematical description for the GN&C algorithms: (1) multi-vehicle orbital mechanics to the rendezvous interface point, i.e., within 100 n.; (2) relative motion solutions (such as Clohessy-Wiltshire type) from the far-field to the near-field interface, i.e., within 1 nm; and (3) close proximity motion, the near-field motion where the relative differences in the gravitational and orbit inertial accelerations can be neglected in the equations of motion. This paper defines the reference coordinate frames and control parameters necessary to model the relative motion and attitude of a spacecraft in close proximity to another space system (Regions 2 and 3) during the Automatic Rendezvous and Capture phase of an orbit operation.
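
    For Region 2, the Clohessy-Wiltshire relative motion solution mentioned above has a standard closed form; a minimal sketch follows (the axis convention, with x radial, y along-track, z cross-track, and the function name are assumptions, not taken from this paper):

```python
import math

def cw_position(x0, y0, z0, vx0, vy0, vz0, n, t):
    """Closed-form Clohessy-Wiltshire solution for the position of a
    chaser relative to a target in circular orbit, where n is the
    target's mean motion (rad/s) and t the elapsed time (s)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0
         + (1 / n) * (4 * s - 3 * n * t) * vy0)
    z = c * z0 + (s / n) * vz0
    return x, y, z
```

    A chaser with zero relative velocity and a purely along-track offset shares the target's orbit, so the solution correctly holds it stationary relative to the target.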

  6. Alert Response to Motion Onset in the Retina

    PubMed Central

    Chen, Eric Y.; Marre, Olivier; Fisher, Clark; Schwartz, Greg; Levy, Joshua; da Silveira, Rava Azeredo

    2013-01-01

    Previous studies have shown that motion onset is very effective at capturing attention and is more salient than smooth motion. Here, we find that this salience ranking is present already in the firing rate of retinal ganglion cells. By stimulating the retina with a bar that appears, stays still, and then starts moving, we demonstrate that a subset of salamander retinal ganglion cells, fast OFF cells, responds significantly more strongly to motion onset than to smooth motion. We refer to this phenomenon as an alert response to motion onset. We develop a computational model that predicts the time-varying firing rate of ganglion cells responding to the appearance, onset, and smooth motion of a bar. This model, termed the adaptive cascade model, consists of a ganglion cell that receives input from a layer of bipolar cells, represented by individual rectified subunits. Additionally, both the bipolar and ganglion cells have separate contrast gain control mechanisms. This model captured the responses to our different motion stimuli over a wide range of contrasts, speeds, and locations. The alert response to motion onset, together with its computational model, introduces a new mechanism of sophisticated motion processing that occurs early in the visual system. PMID:23283327

  7. Non-classical light generated by quantum-noise-driven cavity optomechanics.

    PubMed

    Brooks, Daniel W C; Botter, Thierry; Schreppler, Sydney; Purdy, Thomas P; Brahms, Nathan; Stamper-Kurn, Dan M

    2012-08-23

    Optomechanical systems, in which light drives and is affected by the motion of a massive object, will comprise a new framework for nonlinear quantum optics, with applications ranging from the storage and transduction of quantum information to enhanced detection sensitivity in gravitational wave detectors. However, quantum optical effects in optomechanical systems have remained obscure, because their detection requires the object’s motion to be dominated by vacuum fluctuations in the optical radiation pressure; so far, direct observations have been stymied by technical and thermal noise. Here we report an implementation of cavity optomechanics using ultracold atoms in which the collective atomic motion is dominantly driven by quantum fluctuations in radiation pressure. The back-action of this motion onto the cavity light field produces ponderomotive squeezing. We detect this quantum phenomenon by measuring sub-shot-noise optical squeezing. Furthermore, the system acts as a low-power, high-gain, nonlinear parametric amplifier for optical fluctuations, demonstrating a gain of 20 dB with a pump corresponding to an average of only seven intracavity photons. These findings may pave the way for low-power quantum optical devices, surpassing quantum limits on position and force sensing, and the control and measurement of motion in quantum gases.

  8. Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function

    PubMed Central

    Kayser, Bastian; Mansow-Model, Sebastian; Verrel, Julius; Paul, Friedemann; Brandt, Alexander U.; Schmitz-Hübsch, Tanja

    2016-01-01

    Background The introduction of low-cost optical 3D motion tracking sensors provides new options for effective quantification of motor dysfunction. Objective The present study aimed to evaluate the Kinect V2 sensor against a gold standard motion capture system with respect to accuracy of tracked landmark movements and accuracy and repeatability of derived clinical parameters. Methods Nineteen healthy subjects were concurrently recorded with a Kinect V2 sensor and an optical motion tracking system (Vicon). Six different movement tasks were recorded with 3D full-body kinematics from both systems. Tasks included walking in different conditions, balance and adaptive postural control. After temporal and spatial alignment, agreement of movement signals was described by Pearson’s correlation coefficient and signal-to-noise ratios per dimension. From these movement signals, 45 clinical parameters were calculated, including ranges of motion, torso sway, movement velocities and cadence. Accuracy of parameters was described as absolute agreement, consistency agreement and limits of agreement. Intra-session reliability of 3 to 5 measurement repetitions was described as repeatability coefficient and standard error of measurement for each system. Results Accuracy of Kinect V2 landmark movements was moderate to excellent and depended on movement dimension, landmark location and performed task. Signal-to-noise ratio provided information about Kinect V2 landmark stability and indicated noisier behaviour in the feet and ankles. Most of the derived clinical parameters showed good to excellent absolute agreement (30 parameters showed ICC(3,1) > 0.7) and consistency (38 parameters showed r > 0.7) between both systems. Conclusion Given that this system is low-cost, portable and does not require any sensors to be attached to the body, it could provide numerous advantages when compared to established marker-based or wearable-sensor-based systems. 
The Kinect V2 has the potential to be used as a reliable and valid clinical measurement tool. PMID:27861541
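
    The signal-agreement measure used in this record (Pearson's correlation between temporally aligned Kinect and Vicon landmark signals) can be computed as in this minimal stdlib sketch; the ICC(3,1) statistic also reported is not shown here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length,
    temporally aligned movement signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    Values near 1 indicate that the low-cost sensor tracks the reference system's signal shape, independently of any constant offset or scale difference.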

  9. Monitoring the Wall Mechanics During Stent Deployment in a Vessel

    PubMed Central

    Steinert, Brian D.; Zhao, Shijia; Gu, Linxia

    2012-01-01

    Clinical trials have reported different restenosis rates for various stent designs [1]. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis [2-7]. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two optical high speed cameras to capture the motion of each reference point and resolve three dimensional strains over the deforming surface [8,9]. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate the computational models, and formulate hypotheses for further in vivo study. PMID:22588353

  10. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur due to several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, relative motion between camera or objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image in order to firstly estimate the degradation parameters, and then, to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of accuracy of image restoration given by an objective criterion.
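
    The restore-with-a-linear-filter step can be illustrated in one dimension with Wiener-style regularized inverse filtering (a simplified stdlib sketch using a naive DFT; the paper works on 2D images and also estimates the blur parameters from the spectrum, which is not shown here).

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def motion_psf(length, n):
    """Box point-spread function of a linear motion blur of `length`
    samples, zero-padded to the signal size n."""
    return [1.0 / length] * length + [0.0] * (n - length)

def blur(signal, psf):
    """Circular convolution of the signal with the blur kernel."""
    n = len(signal)
    return [sum(signal[(t - s) % n] * psf[s] for s in range(n))
            for t in range(n)]

def wiener_restore(blurred, psf, k=1e-3):
    """Regularized inverse filter: divide by the blur transfer
    function, with constant k preventing noise blow-up near
    spectral zeros."""
    B, H = dft(blurred), dft(psf)
    F = [b * h.conjugate() / (abs(h) ** 2 + k) for b, h in zip(B, H)]
    return idft(F)
```

    In the noiseless case a small k recovers the original signal almost exactly; with real noisy captures, k trades residual blur against noise amplification.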

  11. Animation control of surface motion capture.

    PubMed

    Tejera, Margara; Casas, Dan; Hilton, Adrian

    2013-12-01

    Surface motion capture (SurfCap) of actor performance from multiple view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space–time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap driven and SurfCap sequence to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and the integrated animation framework are presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.

  12. NESDI FY10 Year in Review Report: The Case For Success 2010

    DTIC Science & Technology

    2010-01-01

    36 CASE STUDY: Motion Assisted Environmental Enclosure for Capturing Paint Overspray in Dry Docks...and to outline a means to assess its environmental impact. 8. Motion Assisted Environmental Enclosure for Capturing Paint Overspray in Dry Docks...in dry docks. 9. Cleaning Solvents for the 21st Century. As part of the Department of Defense’s (DoD) response to eliminating the use of volatile

  13. Validation of an inertial measurement unit for the measurement of jump count and height.

    PubMed

    MacDonald, Kerry; Bahr, Roald; Baltich, Jennifer; Whittaker, Jackie L; Meeuwisse, Willem H

    2017-05-01

    To validate the use of an inertial measurement unit (IMU) for the collection of total jump count and assess the validity of an IMU for the measurement of jump height against 3-D motion analysis. Cross-sectional validation study. 3D motion-capture laboratory and field-based settings. Thirteen elite adolescent volleyball players. Participants performed structured drills, played a four-set volleyball match and performed twelve counter movement jumps. Jump counts from structured drills and match play were validated against visual counts from recorded video. Jump height during the counter movement jumps was validated against concurrent 3-D motion-capture data. The IMU device captured more total jumps (1032) than visual inspection (977) during match play. During structured practice, device jump count sensitivity was strong (96.8%) while specificity was perfect (100%). The IMU underestimated jump height compared to 3D motion-capture with mean differences for maximal and submaximal jumps of 2.5 cm (95%CI: 1.3 to 3.8) and 4.1 cm (3.1-5.1), respectively. The IMU offers a valid measuring tool for jump count. Although the IMU underestimates maximal and submaximal jump height, our findings demonstrate its practical utility for field-based measurement of jump load. Copyright © 2016 Elsevier Ltd. All rights reserved.
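
    The validation statistic reported above, a mean device-minus-reference difference with a 95% confidence interval, can be computed as in this sketch (normal approximation; the numbers in the usage below are made up, not the study's data).

```python
import math

def mean_difference_ci(device, reference, z=1.96):
    """Mean difference between paired device and reference
    measurements, with a normal-approximation confidence interval
    (z = 1.96 gives a 95% interval)."""
    diffs = [d - r for d, r in zip(device, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    half = z * sd / math.sqrt(n)
    return mean, mean - half, mean + half
```

    A positive mean with an interval excluding zero, as in the record's 2.5 cm (1.3 to 3.8), indicates a systematic under- or overestimation rather than random scatter.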

  14. Deblurring for spatial and temporal varying motion with optical computing

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Xue, Dongfeng; Hui, Zhao

    2016-05-01

    A way to estimate and remove spatially and temporally varying motion blur is proposed, which is based on an optical computing system. The translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The work scheme of the JTC system is designed to keep switching between the Cartesian coordinate system and polar coordinate system in different time intervals with a ping-pong handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain a translation motion vector at optical computing speed. In the pong interval, the JTC system works in the polar coordinate system. The rotation motion is transformed to translation motion through coordinate transformation. Then the rotation motion vector can also be obtained from the JTC instantaneously. To deal with continuous spatially variant motion blur, submotion vectors based on the projective motion path blur model are proposed. The submotion vector model is more effective and accurate at modeling spatially variant motion blur than conventional methods. The simulation and real experiment results demonstrate its overall effectiveness.
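
    The coordinate trick underlying the pong interval, that rotation about the origin becomes a pure translation along the angular axis after a Cartesian-to-polar transform, can be verified in a few lines (a generic geometric sketch, independent of the optical hardware):

```python
import math

def rotate(x, y, angle):
    """Rotate a point about the origin by the given angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y, s * x + c * y

def to_polar(x, y):
    """Cartesian -> polar: a rotation leaves the radius r unchanged
    and shifts the angle theta by the rotation angle, so a correlator
    built for translation can measure rotation in this domain."""
    return math.hypot(x, y), math.atan2(y, x)
```

    After the transform, the rotation angle can be read off as the shift in theta, which is exactly what the JTC's translation estimation provides.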

  15. A simple 5-DoF MR-compatible motion signal measurement system.

    PubMed

    Chung, Soon-Cheol; Kim, Hyung-Sik; Yang, Jae-Woong; Lee, Su-Jeong; Choi, Mi-Hyun; Kim, Ji-Hye; Yeon, Hong-Won; Park, Jang-Yeon; Yi, Jeong-Han; Tack, Gye-Rae

    2011-09-01

    The purpose of this study was to develop a simple motion measurement system with magnetic resonance (MR) compatibility and safety. The motion measurement system proposed here can measure 5-DoF motion signals without deteriorating the MR images, and it has no effect on the intense and homogeneous main magnetic field, the temporal-gradient magnetic field (which varies rapidly with time), the transceiver radio frequency (RF) coil, and the RF pulse during MR data acquisition. A three-axis accelerometer and a two-axis gyroscope were used to measure 5-DoF motion signals, and Velcro was used to attach a sensor module to a finger or wrist. To minimize the interference between the MR imaging system and the motion measurement system, nonmagnetic materials were used for all electric circuit components in an MR shield room. To remove the effect of RF pulse, an amplifier, modulation circuit, and power supply were located in a shielded case, which was made of copper and aluminum. The motion signal was modulated to an optic signal using pulse width modulation, and the modulated optic signal was transmitted outside the MR shield room using a high-intensity light-emitting diode and an optic cable. The motion signal was recorded on a PC by demodulating the transmitted optic signal into an electric signal. Various kinematic variables, such as angle, acceleration, velocity, and jerk, can be measured or calculated by using the motion measurement system developed here. This system also enables motion tracking by extracting the position information from the motion signals. It was verified that MR images and motion signals could reliably be measured simultaneously.
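
    The abstract notes that angle, acceleration, velocity, and jerk can be derived from the measured signals. A minimal sketch of that post-processing (our own illustration, not the authors' code): velocity by trapezoidal integration of the acceleration channel, jerk by finite differences. The sampling interval and signal are hypothetical.

```python
def integrate(accel, dt):
    """Trapezoidal integration of acceleration -> velocity (v[0] = 0)."""
    v = [0.0]
    for a0, a1 in zip(accel, accel[1:]):
        v.append(v[-1] + 0.5 * (a0 + a1) * dt)
    return v

def differentiate(accel, dt):
    """Forward-difference jerk from sampled acceleration."""
    return [(a1 - a0) / dt for a0, a1 in zip(accel, accel[1:])]

dt = 0.01                         # 100 Hz sampling (hypothetical)
accel = [2.0] * 101               # constant 2 m/s^2 for 1 s
vel = integrate(accel, dt)        # ends near 2 m/s
jerk = differentiate(accel, dt)   # zero for constant acceleration
```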

  16. Optically driven oscillations of ellipsoidal particles. Part I: experimental observations.

    PubMed

    Mihiretie, B M; Snabre, P; Loudet, J-C; Pouligny, B

    2014-12-01

    We report experimental observations of the mechanical effects of light on ellipsoidal micrometre-sized dielectric particles, with water as the continuous medium. The particles, made of polystyrene, have shapes varying from near disk-like (aspect ratio k = 0.2) to very elongated needle-like (k = 8). Rather than the very tightly focused beam geometry of optical tweezers, we use a moderately focused laser beam to manipulate particles individually by optical levitation. The geometry allows us to vary the longitudinal position of the particle and to capture images perpendicular to the beam axis. Experiments show that moderate-k particles are radially trapped with their long axis lying parallel to the beam. Conversely, elongated (k > 3) or flattened (k < 0.3) ellipsoids never come to rest and permanently "dance" around the beam through coupled translation-rotation motions. The oscillations are shown to occur in general, whether the particle is in bulk water or close to a solid boundary, and may be periodic or irregular. We provide evidence for two bifurcations between static and oscillating states, at k ≈ 0.33 and k ≈ 3 for oblate and prolate ellipsoids, respectively. Based on a recently developed 2-dimensional ray-optics simulation (Mihiretie et al., EPL 100, 48005 (2012)), we propose a simple model that allows understanding the physical origin of the oscillations.

  17. Optical Flow Estimation for Flame Detection in Videos

    PubMed Central

    Mueller, Martin; Karasev, Peter; Kolesov, Ivan; Tannenbaum, Allen

    2014-01-01

    Computational vision-based flame detection has drawn significant attention in the past decade with camera surveillance systems becoming ubiquitous. Whereas many discriminating features, such as color, shape, texture, etc., have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea consists of exploiting the difference between the turbulent, fast, fire motion, and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Then, characteristic features related to the flow magnitudes and directions are computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method is proposed by fire simulations that allow for a controlled environment to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise. PMID:23613042
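
    The feature step — summarizing a flow field by its magnitudes and directions — can be sketched generically as follows (an illustration of the idea, not the paper's exact feature set; turbulent fire motion would spread mass across many direction bins, while rigid motion concentrates in one).

```python
import math

def flow_features(flow, nbins=8):
    """Summarize a flow field (list of (u, v) vectors) by its mean
    magnitude and a normalized histogram of flow directions."""
    mags = [math.hypot(u, v) for u, v in flow]
    hist = [0.0] * nbins
    for u, v in flow:
        ang = math.atan2(v, u) % (2 * math.pi)         # 0 .. 2*pi
        hist[min(int(ang / (2 * math.pi) * nbins), nbins - 1)] += 1
    total = sum(hist) or 1.0
    return sum(mags) / len(mags), [h / total for h in hist]

# Rigid rightward motion: every vector points the same way,
# so all the histogram mass falls in a single direction bin.
mean_mag, hist = flow_features([(1.0, 0.0)] * 16)
```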

  18. Fiber optic sensor based on reflectivity configurations to detect heart rate

    NASA Astrophysics Data System (ADS)

    Yunianto, M.; Marzuki, A.; Riyatun, R.; Lestari, D.

    2016-11-01

    Research of optical fiber-based heart rate detection sensor has been conducted using the reflection configurationon the thorax motion modified. Optical fiber used in this research was Plastic Optical Fiber (POF) with a diameter of 0.5. Optical fiber system is made with two pieces of fiber, the first fiber is to serve as a transmitter transmitting light from the source to the reflector membrane, the second fiber serves as a receiver. One of the endsfrom the two fibersis pressed and positioned perpendicular of reflector membrane which is placed on the surface of the chest. The sensor works on the principle of intensity changes captured by the receiver fiber when the reflector membrane gets the vibe from the heart. The light source used is in the form of Light Emitting Diode (LED) and Light Dependent Resistor (LDR) as a light sensor. Variations are performed on the reflector membrane diameter. The light intensity received by the detector increases along with the increasing width of the reflector membrane diameter. The results show that this sensor can detect the harmonic peak at a frequency of 1.5 Hz; 7.5 Hz; 10.5 Hz; and 22.5 Hz in a healthy human heart with an average value of Beat Per Minute (BPM) by 78 times, a prototype sensor that is made can work and function properly.

  19. Local Dynamic Stability Assessment of Motion Impaired Elderly Using Electronic Textile Pants.

    PubMed

    Liu, Jian; Lockhart, Thurmon E; Jones, Mark; Martin, Tom

    2008-10-01

    A clear association has been demonstrated between gait stability and falls in the elderly. Integration of wearable computing and human dynamic stability measures into home automation systems may help differentiate fall-prone individuals in a residential environment. The objective of the current study was to evaluate the capability of a pair of electronic textile (e-textile) pants system to assess local dynamic stability and to differentiate motion-impaired elderly from their healthy counterparts. A pair of e-textile pants comprised of numerous e-TAGs at locations corresponding to lower extremity joints was developed to collect acceleration, angular velocity and piezoelectric data. Four motion-impaired elderly together with nine healthy individuals (both young and old) participated in treadmill walking with a motion capture system simultaneously collecting kinematic data. Local dynamic stability, characterized by maximum Lyapunov exponent, was computed based on vertical acceleration and angular velocity at lower extremity joints for the measurements from both e-textile and motion capture systems. Results indicated that the motion-impaired elderly had significantly higher maximum Lyapunov exponents (computed from vertical acceleration data) than healthy individuals at the right ankle and hip joints. In addition, maximum Lyapunov exponents assessed by the motion capture system were found to be significantly higher than those assessed by the e-textile system. Despite the difference between these measurement techniques, attaching accelerometers at the ankle and hip joints was shown to be an effective sensor configuration. It was concluded that the e-textile pants system, via dynamic stability assessment, has the potential to identify motion-impaired elderly.
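
    The abstract does not spell out how the maximum Lyapunov exponent was computed; a common choice for experimental time series is Rosenstein's nearest-neighbor divergence method, sketched below under that assumption. The parameters and test signal (the chaotic logistic map, whose true exponent is ln 2 ≈ 0.69 per step) are illustrative.

```python
import math

def max_lyapunov(x, emb_dim=3, lag=1, min_sep=10, horizon=5):
    """Rosenstein-style estimate of the maximum Lyapunov exponent: embed
    the series, pair each point with its nearest temporally separated
    neighbor, and fit the slope of the mean log-divergence curve."""
    n = len(x) - (emb_dim - 1) * lag
    emb = [[x[i + j * lag] for j in range(emb_dim)] for i in range(n)]

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    log_div, counts = [0.0] * (horizon + 1), [0] * (horizon + 1)
    for i in range(n - horizon):
        # nearest neighbor, excluding temporally close points
        j = min((k for k in range(n - horizon) if abs(k - i) > min_sep),
                key=lambda k: dist(emb[i], emb[k]))
        for k in range(horizon + 1):
            d = dist(emb[i + k], emb[j + k])
            if d > 0:
                log_div[k] += math.log(d)
                counts[k] += 1
    y = [s / c for s, c in zip(log_div, counts)]
    ks = range(horizon + 1)
    kbar, ybar = sum(ks) / len(ks), sum(y) / len(y)
    # least-squares slope of mean log-divergence vs. step index
    return (sum((k - kbar) * (v - ybar) for k, v in zip(ks, y))
            / sum((k - kbar) ** 2 for k in ks))

# Logistic map at r = 4: chaotic, with true exponent ln 2 per step.
xs = [0.3]
for _ in range(499):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
lam = max_lyapunov(xs, emb_dim=1)
```

    A positive estimated exponent indicates exponential divergence of nearby trajectories, the property used in the study to characterize reduced local dynamic stability.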

  20. The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

    PubMed Central

    Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.

    2015-01-01

    Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load-transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N, and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements from 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration overall, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that, when the camera is located within ±30 degrees of the plane of motion, 2D video tracking has sufficient accuracy relative to 3D motion capture to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion in a simulated repetitive motion task. PMID:25978764
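
    The agreement metric reported here (median absolute difference between synchronized speed traces) and a simple duty-cycle proxy can be sketched as follows. The sample values and rest threshold are hypothetical, not data from the study.

```python
import statistics

def median_abs_diff(a, b):
    """Median absolute difference between two synchronized speed traces
    (the agreement metric reported in the study)."""
    return statistics.median(abs(x - y) for x, y in zip(a, b))

def duty_cycle(speed, threshold):
    """Fraction of samples with speed above a rest threshold -- one
    simple duty-cycle proxy for hand activity level (HAL) estimation."""
    return sum(s > threshold for s in speed) / len(speed)

video = [120.0, 310.0, 95.0, 280.0, 140.0]   # mm/s, hypothetical samples
mocap = [110.0, 330.0, 90.0, 300.0, 120.0]
mad = median_abs_diff(video, mocap)
dc = duty_cycle(video, threshold=100.0)
```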

  1. Optically gated beating-heart imaging

    PubMed Central

    Taylor, Jonathan M.

    2014-01-01

    The constant motion of the beating heart presents an obstacle to clear optical imaging, especially 3D imaging, in small animals where direct optical imaging would otherwise be possible. Gating techniques exploit the periodic motion of the heart to computationally “freeze” this movement and overcome motion artifacts. Optically gated imaging represents a recent development of this, where image analysis is used to synchronize acquisition with the heartbeat in a completely non-invasive manner. This article will explain the concept of optical gating, discuss a range of different implementation strategies and their strengths and weaknesses. Finally we will illustrate the usefulness of the technique by discussing applications where optical gating has facilitated novel biological findings by allowing 3D in vivo imaging of cardiac myocytes in their natural environment of the beating heart. PMID:25566083

  2. Hand motion modeling for psychology analysis in job interview using optical flow-history motion image: OF-HMI

    NASA Astrophysics Data System (ADS)

    Khalifa, Intissar; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    To survive the competition, companies always want to hire the best employees. Selection depends on the candidate's answers to the interviewer's questions and on the candidate's behavior during the interview session. The study of this behavior is usually based on a psychological analysis of the movements accompanying the answers and discussions. Few techniques have been proposed to date to automatically analyze a candidate's nonverbal behavior. This paper is part of a work-psychology recognition system; it concentrates on spontaneous hand gestures, which, according to psychologists, are very significant in interviews. We propose a motion history representation of the hand based on a hybrid approach that merges optical flow and history motion images. The optical flow technique is used first to detect hand motions in each frame of a video sequence. Second, we use history motion images (HMI) to accumulate the output of the optical flow in order to obtain a good representation of the hand's local movement in a global temporal template.

  3. Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.

    PubMed

    Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena

    2014-11-01

    A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system is endowed with hardware and software synchronous image acquisition with timestamp embedding in the captured images, a brain surface oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed tracking a customized object in different positions, orientations, linear, and angular speeds. The system is able to detect an immobile object position and orientation with a maximum error of 0.5 mm and 1.6° in all depth of field, and tracking a moving object until 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient, immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domain. The amplitude of motion of the cloud of features' center of mass was inferior to 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis related to the sympathovagal balance, breathing, and blood pressure with 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system to measure brain shift and pulsatility with an accuracy superior to other reported systems.
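
    The frequency-domain step — locating the breathing and pulsatility peaks in the center-of-mass displacement trace — can be sketched with a direct DFT (an illustration, not the authors' pipeline; the sampling rate and trace below are synthetic).

```python
import cmath
import math

def dominant_freq(signal, fs):
    """Return the frequency (Hz) of the largest DFT magnitude peak,
    ignoring the DC bin."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(signal))
        mags.append((abs(s), k))
    k = max(mags)[1]
    return k * fs / n

fs = 20.0                                    # Hz, hypothetical camera rate
n = 100                                      # 5 s of data
trace = [math.sin(2 * math.pi * 1.0 * i / fs)           # 1 Hz "pulse"
         + 0.5 * math.sin(2 * math.pi * 0.2 * i / fs)   # 0.2 Hz "breathing"
         for i in range(n)]
f = dominant_freq(trace, fs)
```

    In the study, the corresponding peaks appeared near 0.03-0.05 Hz (sympathovagal balance), 0.2 Hz (breathing), and 1 Hz (blood pressure).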

  4. Computer Controlled Optical Surfacing With Orbital Tool Motion

    NASA Astrophysics Data System (ADS)

    Jones, Robert A.

    1985-11-01

    Asymmetric aspheric optical surfaces are very difficult to fabricate using classical techniques and laps the same size as the workpiece. Opticians can produce such surfaces by hand grinding and polishing, using small laps with orbital tool motion. However, this is a time consuming process unsuitable for large optical elements.

  5. A painless and constraint-free method to estimate viscoelastic passive dynamics of limbs' joints to support diagnosis of neuromuscular diseases.

    PubMed

    Venture, Gentiane; Nakamura, Yoshihiko; Yamane, Katsu; Hirashima, Masaya

    2007-01-01

    Though seldom identified, the dynamics of human joints is important in the fields of medical robotics and medical research. We present a general method to estimate, in vivo and simultaneously, the passive dynamics of the human limbs' joints. It is based on a multi-body description of the human body and its kinematics and dynamics computations. The linear passive joint dynamics of the shoulders and elbows (stiffness, viscosity, and friction) are estimated simultaneously using the linear least-squares method. Movements were acquired with an optical motion capture studio from one examinee during the clinical diagnosis of neuromuscular diseases. Experimental results are given and discussed.
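
    The estimation step can be sketched as a linear least-squares fit of a plausible passive joint model, tau = K*theta + B*dtheta + C*sign(dtheta) (stiffness, viscosity, Coulomb friction); the exact regressors used in the paper may differ, and the trajectories below are synthetic.

```python
import math

def fit_passive_dynamics(theta, dtheta, tau):
    """Least-squares fit of tau = K*theta + B*dtheta + C*sign(dtheta)."""
    sign = lambda v: (v > 0) - (v < 0)
    rows = [[q, dq, sign(dq)] for q, dq in zip(theta, dtheta)]
    # normal equations A^T A x = A^T b, solved by Gauss-Jordan elimination
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    atb = [sum(r[i] * t for r, t in zip(rows, tau)) for i in range(3)]
    m = [ata[i] + [atb[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))   # partial pivot
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

K, B, C = 3.0, 0.5, 0.2                       # hypothetical ground truth
theta = [math.sin(0.1 * i) for i in range(100)]
dtheta = [math.cos(0.1 * i) for i in range(100)]
tau = [K * q + B * dq + C * ((dq > 0) - (dq < 0))
       for q, dq in zip(theta, dtheta)]
est = fit_passive_dynamics(theta, dtheta, tau)
```

    With noise-free synthetic torques the fit recovers the three coefficients exactly, which is a useful sanity check before applying the same regression to measured data.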

  6. A light field microscope imaging spectrometer based on the microlens array

    NASA Astrophysics Data System (ADS)

    Yao, Yu-jia; Xu, Feng; Xia, Yin-xiang

    2017-10-01

    A new light field microscope imaging spectrometer, composed of a microscope objective, a microlens array, and a spectrometry system, was designed in this paper. 5-D information (4-D light field and 1-D spectrum) of the sample can be captured by the snapshot system in only one exposure, avoiding the motion blur and aberration caused by the scanning process of traditional imaging spectrometry. The microscope objective was used as the front group and the microlens array as the rear group. The optical design of the system was simulated in Zemax, and the parameter-matching condition between the microscope objective and the microlens array was examined in detail during the simulation. The result simulated in the image plane was analyzed and discussed.

  7. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel of an image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal, multi-resolution, coarse-to-fine search strategy: a pyramid decomposition yields multi-resolution images; an iterative relationship from the highest level to the lowest yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to produce the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
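
    The coarse-to-fine idea can be sketched on 1D signals (a deliberate simplification of the 2D affine case described above; the signals and shift are synthetic): search exhaustively at the coarsest pyramid level, then double the estimate and refine locally at each finer level.

```python
import math

def downsample(sig):
    """Halve resolution by averaging adjacent samples."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def best_shift(a, b, center, radius):
    """Integer shift of b relative to a minimizing mean SSD over the
    overlap, searched within [center - radius, center + radius]."""
    def ssd(s):
        pairs = [(a[i], b[i + s]) for i in range(len(a))
                 if 0 <= i + s < len(b)]
        return sum((x - y) ** 2 for x, y in pairs) / len(pairs)
    return min(range(center - radius, center + radius + 1), key=ssd)

def pyramid_shift(a, b, levels=3):
    """Coarse-to-fine translation estimate across a resolution pyramid."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    coarse_a, coarse_b = pyr[-1]
    shift = best_shift(coarse_a, coarse_b, 0, len(coarse_a) // 4)
    for a_l, b_l in reversed(pyr[:-1]):
        # double the coarse estimate, then refine in a small window
        shift = best_shift(a_l, b_l, 2 * shift, 2)
    return shift

base = [math.sin(0.05 * i) + 0.3 * math.sin(0.21 * i) for i in range(256)]
shifted = base[17:] + base[:17]       # contents shifted left by 17 samples
est = pyramid_shift(base, shifted)
```

    The coarse search only covers a small window at low resolution, which is why the pyramid handles motions far larger than a single-level gradient method could.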

  8. Kinematic analysis of basic rhythmic movements of hip-hop dance: motion characteristics common to expert dancers.

    PubMed

    Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo

    2015-02-01

    In hip-hop dance contests, a procedure for evaluating performances has not been clearly defined, and objective criteria for evaluation are necessary. It is assumed that most hip-hop dance techniques have common motion characteristics by which judges determine the dancer's skill level. This study aimed to extract motion characteristics that may be linked to higher evaluations by judges. Ten expert and 12 nonexpert dancers performed basic rhythmic movements at a rate of 100 beats per minute. Their movements were captured using a motion capture system, and eight judges evaluated the performances. Four kinematic parameters, including the amplitude of the body motions and the phase delay, which indicates the phase difference between two joint angles, were calculated. The two groups showed no significant differences in terms of the amplitudes of the body motions. In contrast, the phase delay between the head motion and the other body parts' motions of expert dancers who received higher scores from the judges, which was approximately a quarter cycle, produced a loop-shaped motion of the head. It is suggested that this slight phase delay was related to the judges' evaluations and that these findings may help in constructing an objective evaluation system.
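
    The phase-delay computation between two body-part trajectories can be sketched with circular cross-correlation (our illustration; the paper's exact phase-delay definition may differ). A quarter-cycle delay on a 40-sample beat corresponds to a 10-sample lag.

```python
import math

def phase_lag(a, b, max_lag):
    """Lag (in samples, 0 <= lag < max_lag) at which b best matches a,
    found by maximizing the circular cross-correlation."""
    n = len(a)
    return max(range(max_lag),
               key=lambda s: sum(a[i] * b[(i + s) % n] for i in range(n)))

period = 40                                      # samples per beat
head = [math.sin(2 * math.pi * i / period) for i in range(200)]
torso = [math.sin(2 * math.pi * (i - 10) / period) for i in range(200)]
lag = phase_lag(head, torso, period)             # quarter cycle = 10 samples
```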

  9. Children's Understanding of Large-Scale Mapping Tasks: An Analysis of Talk, Drawings, and Gesture

    ERIC Educational Resources Information Center

    Kotsopoulos, Donna; Cordy, Michelle; Langemeyer, Melanie

    2015-01-01

    This research examined how children represent motion in large-scale mapping tasks that we referred to as "motion maps". The underlying mathematical content was transformational geometry. In total, 19 children, 8 to 10 years old, created motion maps and digitally captured their motion maps with accompanying verbal descriptions. Analysis of…

  10. Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics.

    PubMed

    Zhang, Jun-Tian; Novak, Alison C; Brouwer, Brenda; Li, Qingguo

    2013-08-01

    This study aims to validate a commercially available inertial sensor based motion capture system, Xsens MVN BIOMECH using its native protocols, against a camera-based motion capture system for the measurement of joint angular kinematics. Performance was evaluated by comparing waveform similarity using range of motion, mean error and a new formulation of the coefficient of multiple correlation (CMC). Three dimensional joint angles of the lower limbs were determined for ten healthy subjects while they performed three daily activities: level walking, stair ascent, and stair descent. Under all three walking conditions, the Xsens system most accurately determined the flexion/extension joint angle (CMC > 0.96) for all joints. The joint angle measurements associated with the other two joint axes had lower correlation including complex CMC values. The poor correlation in the other two joint axes is most likely due to differences in the anatomical frame definition of limb segments used by the Xsens and Optotrak systems. Implementation of a protocol to align these two systems is necessary when comparing joint angle waveforms measured by the Xsens and other motion capture systems.
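
    Two of the waveform-comparison measures used here, range of motion and mean error, are straightforward to compute; a sketch with synthetic joint-angle waveforms (the CMC formulation itself is omitted here, and the 2-degree offset is hypothetical).

```python
import math

def range_of_motion(angles):
    """Range of motion: peak-to-peak amplitude of a joint-angle waveform."""
    return max(angles) - min(angles)

def mean_error(a, b):
    """Mean signed error between two synchronized angle waveforms (deg)."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

optical = [30 * math.sin(2 * math.pi * i / 100) + 20 for i in range(100)]
imu = [x + 2.0 for x in optical]          # constant 2-degree offset
rom_diff = range_of_motion(imu) - range_of_motion(optical)
bias = mean_error(imu, optical)
```

    A pure offset shifts the mean error but leaves the range of motion unchanged, which is why both measures (plus a correlation measure such as the CMC) are reported together.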

  11. Multimodal transport and dispersion of organelles in narrow tubular cells

    NASA Astrophysics Data System (ADS)

    Mogre, Saurabh S.; Koslover, Elena F.

    2018-04-01

    Intracellular components explore the cytoplasm via active motor-driven transport in conjunction with passive diffusion. We model the motion of organelles in narrow tubular cells using analytical techniques and numerical simulations to study the efficiency of different transport modes in achieving various cellular objectives. Our model describes length and time scales over which each transport mode dominates organelle motion, along with various metrics to quantify exploration of intracellular space. For organelles that search for a specific target, we obtain the average capture time for given transport parameters and show that diffusion and active motion contribute to target capture in the biologically relevant regime. Because many organelles have been found to tether to microtubules when not engaged in active motion, we study the interplay between immobilization due to tethering and increased probability of active transport. We derive parameter-dependent conditions under which tethering enhances long-range transport and improves the target capture time. These results shed light on the optimization of intracellular transport machinery and provide experimentally testable predictions for the effects of transport regulation mechanisms such as tethering.

  12. Motion cues that make an impression: Predicting perceived personality by minimal motion information.

    PubMed

    Koppensteiner, Markus

    2013-11-01

    The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach brief silent video clips of 40 speakers were presented to independent observers (i.e., did not know speakers) who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and variation in motion direction to openness. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information.

  13. NASA Tech Briefs, October 2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Topics include: Relative-Motion Sensors and Actuators for Two Optical Tables; Improved Position Sensor for Feedback Control of Levitation; Compact Tactile Sensors for Robot Fingers; Improved Ion-Channel Biosensors; Suspended-Patch Antenna With Inverted, EM-Coupled Feed; System Would Predictively Preempt Traffic Lights for Emergency Vehicles; Optical Position Encoders for High or Low Temperatures; Inter-Valence-Subband/Conduction-Band-Transport IR Detectors; Additional Drive Circuitry for Piezoelectric Screw Motors; Software for Use with Optoelectronic Measuring Tool; Coordinating Shared Activities; Software Reduces Radio-Interference Effects in Radar Data; Using Iron to Treat Chlorohydrocarbon-Contaminated Soil; Thermally Insulating, Kinematic Tensioned-Fiber Suspension; Back Actuators for Segmented Mirrors and Other Applications; Mechanism for Self-Reacted Friction Stir Welding; Lightweight Exoskeletons with Controllable Actuators; Miniature Robotic Submarine for Exploring Harsh Environments; Electron-Spin Filters Based on the Rashba Effect; Diffusion-Cooled Tantalum Hot-Electron Bolometer Mixers; Tunable Optical True-Time Delay Devices Would Exploit EIT; Fast Query-Optimized Kernel-Machine Classification; Indentured Parts List Maintenance and Part Assembly Capture Tool - IMPACT; An Architecture for Controlling Multiple Robots; Progress in Fabrication of Rocket Combustion Chambers by VPS; CHEM-Based Self-Deploying Spacecraft Radar Antennas; Scalable Multiprocessor for High-Speed Computing in Space; and Simple Systems for Detecting Spacecraft Meteoroid Punctures.

  14. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    PubMed Central

    Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR. PMID:27403044

  15. Reference equations of motion for automatic rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1992-01-01

    The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.
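
    The abstract does not reproduce the equations themselves. The Clohessy-Wiltshire (Hill) equations are the standard linearized model for relative motion about a circular target orbit, so the sketch below assumes them; the mean motion, step size, and initial state are illustrative.

```python
import math

def cw_derivs(state, n):
    """Clohessy-Wiltshire relative-motion dynamics (circular target orbit):
    x radial, y along-track, z cross-track; n is the orbital mean motion."""
    x, y, z, vx, vy, vz = state
    return [vx, vy, vz,
            3 * n * n * x + 2 * n * vy,
            -2 * n * vx,
            -n * n * z]

def rk4_step(state, n, dt):
    """One classical Runge-Kutta (RK4) integration step."""
    k1 = cw_derivs(state, n)
    k2 = cw_derivs([s + 0.5 * dt * k for s, k in zip(state, k1)], n)
    k3 = cw_derivs([s + 0.5 * dt * k for s, k in zip(state, k2)], n)
    k4 = cw_derivs([s + dt * k for s, k in zip(state, k3)], n)
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

n = 0.0011                                # rad/s, roughly ISS mean motion
state = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]   # 5 m cross-track offset, at rest
dt = 1.0
for _ in range(int(round(math.pi / n))):  # propagate half an orbit
    state = rk4_step(state, n, dt)
```

    The cross-track channel is a pure harmonic oscillator, so after half an orbit the 5 m offset flips sign; a propagator like this is the kind of relative-motion model that could feed the error signal of the flight control system described above.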

  16. KSC-08pd1899

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – NYIT MOCAP (Motion Capture) team Project Manager Jon Squitieri attaches a retro reflective marker to a motion capture suit worn by a technician who will be assembling the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  17. Motion compensation for in vivo subcellular optical microscopy.

    PubMed

    Lucotte, B; Balaban, R S

    2014-04-01

    In this review, we focus on the impact of tissue motion on attempting to conduct subcellular resolution optical microscopy, in vivo. Our position is that tissue motion is one of the major barriers in conducting these studies along with light induced damage, optical probe loading as well as absorbing and scattering effects on the excitation point spread function and collection of emitted light. Recent developments in the speed of image acquisition have reached the limit, in most cases, where the signal from a subcellular voxel limits the speed and not the scanning rate of the microscope. Different schemes for compensating for tissue displacements due to rigid body and deformation are presented from tissue restriction, gating, adaptive gating and active tissue tracking. We argue that methods that minimally impact the natural physiological motion of the tissue are desirable because the major reason to perform in vivo studies is to evaluate normal physiological functions. Towards this goal, active tracking using the optical imaging data itself to monitor tissue displacement and either prospectively or retrospectively correct for the motion without affecting physiological processes is desirable. Critical for this development was the implementation of near real time image processing in conjunction with the control of the microscope imaging parameters. Clearly, the continuing development of methods of motion compensation as well as significant technological solutions to the other barriers to tissue subcellular optical imaging in vivo, including optical aberrations and overall signal-to-noise ratio, will make major contributions to the understanding of cell biology within the body.

  18. Using motion capture technology to measure the effects of magnification loupes on dental operator posture: A pilot study.

    PubMed

    Branson, B G; Abnos, R M; Simmer-Beck, M L; King, G W; Siddicky, S F

    2018-01-01

    Motion analysis has great potential for quantitatively evaluating dental operator posture and the impact of interventions such as magnification loupes on posture and the subsequent development of musculoskeletal disorders. This study sought to determine the feasibility of motion capture technology for measuring dental operator posture and to examine the impact that different styles of magnification loupes have on that posture. Forward and lateral head flexion were measured for two operators while they completed a periodontal probing procedure. Each was measured while wearing magnification loupes (flip-up, FL; through-the-lens, TTL) and basic safety lenses. Both operators exhibited reduced forward-flexion range of motion (ROM) when using loupes (TTL or FL) compared to the baseline lenses (BL). In contrast to forward flexion, no consistent trends were observed for lateral flexion between subjects. The researchers report that it is possible to measure dental operator posture using motion capture technology. More study is needed to determine which type of magnification loupes (FL or TTL) is superior in improving dental operator posture. Some evidence was found that the quality of operator posture may be related more to the use of magnification loupes than to the specific type of lenses worn.

  19. Imaging of optically diffusive media by use of opto-elastography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude

    2007-02-01

    We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of localised motion induced in the medium by the radiation force and the subsequently propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both optical and shear mechanical properties.
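
    The core measurement described above, frame-to-frame speckle decorrelation, can be sketched in a few lines (an illustrative numpy sketch, not the authors' code; all function names are our own):

```python
import numpy as np

def frame_correlation(a, b):
    """Zero-mean normalized correlation between two speckle frames."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def decorrelation_trace(frames):
    """Correlation of each frame with its predecessor; dips mark transient motion."""
    return [frame_correlation(frames[i - 1], frames[i]) for i in range(1, len(frames))]
```

    A static medium yields correlations near 1; radiation-force-induced displacement between frames drives the correlation toward 0, which is the time-resolved signature the camera detects.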

  20. A natural user interface to integrate citizen science and physical exercise

    PubMed Central

    Palermo, Eduardo; Laut, Jeffrey; Nov, Oded; Porfiri, Maurizio

    2017-01-01

    Citizen science enables volunteers to contribute to scientific projects in which massive data collection and analysis are often required. Volunteers participate in citizen science activities online, from their homes, or in the field, and are motivated by both intrinsic and extrinsic factors. Here, we investigated the possibility of integrating citizen science tasks within physical exercises envisaged as part of a potential rehabilitation therapy session. The citizen science activity entailed environmental mapping of a polluted body of water using a miniature instrumented boat, which was remotely controlled by the participants through physical gestures tracked by a low-cost markerless motion capture system. Our findings demonstrate that the natural user interface offers an engaging and effective means of performing environmental monitoring tasks. At the same time, the citizen science activity increases the commitment of the participants, leading to better motion performance, quantified through an array of objective indices. The study constitutes a first and necessary step toward rehabilitative treatment of the upper limb through citizen science and low-cost markerless optical systems. PMID:28231261

  1. Integration of time as a factor in ergonomic simulation.

    PubMed

    Walther, Mario; Muñoz, Begoña Toledo

    2012-01-01

    The paper describes the application of a simulation-based ergonomic evaluation. Within a pilot project, the algorithms of the screening method of the European Assembly Worksheet were transferred into an existing digital human model. Movement data were recorded with a specially developed hybrid motion capture system. A prototype of the system was built and is currently being tested at the Volkswagen Group. First results showed the feasibility of simulation-based ergonomic evaluation with motion capture.

  2. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregating pixel-based motion detection into regions of interest that contain single moving objects is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or form the basis for action recognition. Further, motion is an essential saliency measure that can effectively support high-level image analysis. For static cameras, background subtraction methods achieve good results; motion aggregation on freely moving cameras, on the other hand, is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: the ego-motion of the camera, and object motion that is independent of the camera motion. When a scene is captured with a moving camera, these two motion types are inseparably blended. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Here we follow the idea proposed by Jung and Sukhatme: normalized intensity differences originating from a sequence of ego-motion-compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation, so that the resulting a posteriori distribution concentrates on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. To estimate the a posteriori distribution effectively, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that remains homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
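
A bootstrap particle filter of the kind described, weighting particles by an ego-motion-compensated difference image and resampling, can be sketched as follows (a minimal illustration under our own simplifications, not the authors' implementation, and without their competition scheme):

```python
import numpy as np

def particle_filter_step(particles, diff_img, motion_std=2.0, rng=None):
    """One predict/weight/resample cycle over a difference image whose
    (nonnegative) values approximate the probability of independent motion."""
    rng = rng or np.random.default_rng()
    h, w = diff_img.shape
    # Predict: diffuse particle positions with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
    # Weight: look up the normalized intensity difference under each particle.
    weights = diff_img[particles[:, 0].astype(int), particles[:, 1].astype(int)]
    total = weights.sum()
    weights = weights / total if total > 0 else np.full(len(particles), 1.0 / len(particles))
    # Resample: the population concentrates on independently moving regions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

Iterating this step concentrates the particle set on regions whose difference-image amplitude is consistently high, i.e. on independently moving objects.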

  3. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Fan, Rukun; Geng, Weidong

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music-to-motion mapping embodied in example dance motions and their accompanying background music. A key step is to train a music-to-motion matching quality rating function by learning the mapping relationship exhibited in synchronized music and dance motion data captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match a piece of music, we introduce a constraint-based dynamic programming procedure that considers both music-to-motion matching quality and the visual smoothness of the resulting dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with the results of several peer methods, using motions captured from professional dancers' performances as the gold standard. We also conducted several medium-scale user studies to explore how our dance motion synthesis method can perceptually outperform existing methods in synthesizing dance motions to match a piece of music. These user studies produced very positive results for several Asian dance genres, confirming the advantages of our method.
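
    The dynamic programming idea, picking one motion segment per music segment while trading matching quality against transition smoothness, reduces to a Viterbi-style recursion. A toy sketch (our own formulation, not the paper's exact cost model or constraints):

```python
import numpy as np

def best_segment_sequence(match_cost, trans_cost):
    """Viterbi-style DP over T music segments and K candidate motion segments.
    match_cost: (T, K) music-to-motion mismatch; trans_cost: (K, K) smoothness
    penalty for chaining segment i before segment j."""
    T, K = match_cost.shape
    total = match_cost[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        step = total[:, None] + trans_cost      # (K, K): previous -> current
        back[t] = step.argmin(axis=0)
        total = step.min(axis=0) + match_cost[t]
    # Backtrack the cheapest chain of motion segments.
    seq = [int(total.argmin())]
    for t in range(T - 1, 0, -1):
        seq.append(int(back[t][seq[-1]]))
    return seq[::-1]
```

    With zero transition cost the sequence simply follows the per-segment best match; a strong transition penalty instead favors smooth (here, constant) chains even at some cost in matching quality.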

  4. Spacesuit and Space Vehicle Comparative Ergonomic Evaluation

    NASA Technical Reports Server (NTRS)

    England, Scott; Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Blackledge, Christopher; Perez, Esau; Rajulu, Sudhakar

    2011-01-01

    With the advent of the latest manned spaceflight objectives, a series of prototype launch and reentry spacesuit architectures were evaluated for eventual down selection by NASA based on the performance of a set of designated tasks. A consolidated approach was taken to testing, concurrently collecting suit mobility data, seat-suit-vehicle interface clearances and movement strategies within the volume of a Multi-Purpose Crew Vehicle mockup. To achieve the objectives of the test, a requirement was set forth to maintain high mockup fidelity while using advanced motion capture technologies. These seemingly mutually exclusive goals were accommodated with the construction of an optically transparent and fully adjustable frame mockup. The mockup was constructed such that it could be dimensionally validated rapidly with the motion capture system. This paper will describe the method used to create a motion capture compatible space vehicle mockup, the consolidated approach for evaluating spacesuits in action, as well as the various methods for generating hardware requirements for an entire population from the resulting complex data set using a limited number of test subjects. Kinematics, hardware clearance, suited anthropometry, and subjective feedback data were recorded on fifteen unsuited and five suited subjects. Unsuited subjects were selected chiefly by anthropometry, in an attempt to find subjects who fell within predefined criteria for medium male, large male and small female subjects. The suited subjects were selected as a subset of the unsuited subjects and tested in both unpressurized and pressurized conditions. Since the prototype spacesuits were fabricated in a single size to accommodate an approximately average sized male, the findings from the suit testing were systematically extrapolated to the extremes of the population to anticipate likely problem areas. 
This extrapolation was achieved by first performing a population analysis comparing the suited subjects' performance to their unsuited performance, and then applying the results to the entire population range. The use of a transparent space vehicle mockup enabled the collection of large amounts of data during human-in-the-loop testing. Mobility data revealed that most of the tested spacesuits had sufficient ranges of motion for the tasks to be performed successfully. A failed task by a suited subject most often stemmed from a combination of poor field of view while seated and poor glove dexterity when pressurized, or from suit/vehicle interface issues. Seat ingress/egress testing showed that problems with anthropometric accommodation do not occur exclusively with the largest or smallest subjects, but rather with specific combinations of measurements that lead to narrower seat ingress/egress clearance.

  5. Motion dazzle and camouflage as distinct anti-predator defenses.

    PubMed

    Stevens, Martin; Searle, W Tom L; Seymour, Jenny E; Marshall, Kate L A; Ruxton, Graeme D

    2011-11-25

    Camouflage patterns that hinder detection and/or recognition by antagonists are widely studied in both human and animal contexts. Patterns of contrasting stripes that purportedly degrade an observer's ability to judge the speed and direction of moving prey ('motion dazzle') are, however, rarely investigated. This is despite motion dazzle having been fundamental to the appearance of warships in both world wars and often postulated as the selective agent leading to repeated patterns on many animals (such as zebra and many fish, snake, and invertebrate species). Such patterns often appear conspicuous, suggesting that protection while moving by motion dazzle might impair camouflage when stationary. However, the relationship between motion dazzle and camouflage is unclear because disruptive camouflage relies on high-contrast markings. In this study, we used a computer game with human subjects detecting and capturing either moving or stationary targets with different patterns, in order to provide the first empirical exploration of the interaction of these two protective coloration mechanisms. Moving targets with stripes were caught significantly less often and missed more often than targets with camouflage patterns. However, when stationary, targets with camouflage markings were captured less often and caused more false detections than those with striped patterns, which were readily detected. Our study provides the clearest evidence to date that some patterns inhibit the capture of moving targets, but that camouflage and motion dazzle are not complementary strategies. Therefore, the specific coloration that evolves in animals will depend on how the life history and ontogeny of each species influence the trade-off between the costs and benefits of motion dazzle and camouflage.

  6. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation

    NASA Astrophysics Data System (ADS)

    Petrou, Zisis I.; Xian, Yang; Tian, YingLi

    2018-04-01

    Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, and safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external-example learning-based super-resolution method is applied to the original images to generate higher-resolution versions. Then, an optical flow approach is applied to the higher-resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracy. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than on the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach yields accurate motion vectors at unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
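
    The interpolation step, turning sparse correspondences into a dense, subpixel-valued motion field, can be illustrated with simple inverse-distance weighting (a sketch of the general idea with our own names; the paper's actual interpolation scheme may differ):

```python
import numpy as np

def densify_flow(points, vectors, shape, p=2.0, eps=1e-6):
    """Interpolate sparse motion vectors to a dense field by inverse-distance
    weighting. points: (N, 2) row/col positions; vectors: (N, 2) displacements."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([rows, cols], axis=-1).astype(float)       # (H, W, 2)
    d = np.linalg.norm(grid[..., None, :] - points, axis=-1)   # (H, W, N)
    w = 1.0 / (d ** p + eps)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ vectors    # (H, W, 2) dense flow with continuous subpixel values
```

    Every grid cell receives a weighted average of the sparse vectors, so the field is continuous and exactly honors each correspondence at its own location.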

  7. Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology

    NASA Astrophysics Data System (ADS)

    Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan

    2016-05-01

    This paper concerns the problem of platform-vibration-induced band-to-band misregistration in a spaceborne acousto-optic imaging spectrometer. Registering images of different bands formed at different times or positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented that uses the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Finally, a spectra restoration experiment using an additional motion detection channel is presented for the first time, demonstrating the accurate spectral image registration capability of this technique.

  8. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.

  9. Optical pseudomotors for soft x-ray beamlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedreira, P., E-mail: ppedreira@cells.es; Sics, I.; Sorrentino, A.

    2016-05-15

    Optical elements of soft x-ray beamlines usually have motorized translations and rotations that allow for fine alignment of the beamline: steering the photon beam at given positions and correcting the focus on slits or on the sample. Generally, each degree of freedom of a mirror induces a change in several parameters of the beam; inversely, several motions are required to actuate a single optical parameter while keeping the others unchanged. We define optical pseudomotors as combinations of physical motions of the optical elements of a beamline that modify one optical parameter without affecting the others. We describe a method to obtain analytic relationships between the physical motions of mirrors and the corresponding variations of the beam parameters. This method has been implemented and tested at two beamlines at ALBA, where it is used to control the focus of the photon beam and its position independently.
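
    The pseudomotor concept, inverting a response matrix so that one beam parameter changes while the rest stay fixed, can be sketched as a least-squares solve (an illustrative sketch only; the paper derives the analytic relationships, and all names here are our own):

```python
import numpy as np

def pseudomotor_move(J, param_index, delta):
    """Given a response matrix J (rows: beam parameters, columns: physical
    motor motions), return the combination of motor moves that changes the
    selected beam parameter by `delta` while leaving the others unchanged."""
    target = np.zeros(J.shape[0])
    target[param_index] = delta
    moves, *_ = np.linalg.lstsq(J, target, rcond=None)
    return moves
```

    Each pseudomotor is thus one column of the (pseudo)inverse of the response matrix: a coordinated multi-motor move that acts on a single optical parameter.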

  10. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  11. Applied research of embedded WiFi technology in the motion capture system

    NASA Astrophysics Data System (ADS)

    Gui, Haixia

    2012-04-01

    Embedded WiFi is one of the current hot spots in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Building on WiFi's advantages, such as wire-free installation, simple operation, and stable transmission, it then presents a system design for applying embedded wireless WiFi technology in a motion capture system, and verifies the effectiveness of the design with WiFi-based wireless sensor hardware and software.

  12. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  13. Optical coherence tomography as a novel tool for in-line monitoring of a pharmaceutical film-coating process.

    PubMed

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G

    2014-05-13

    Optical coherence tomography (OCT) is a contact-free, non-destructive, high-resolution imaging technique based on low-coherence interferometry. This study investigates the application of spectral-domain OCT as an in-line quality control tool for monitoring pharmaceutical film-coated tablets. OCT images of several commercially available film-coated tablets of different shapes, formulations, and coating thicknesses were captured off-line using two OCT systems with centre wavelengths of 830 nm and 1325 nm. Based on the off-line image evaluation, another OCT system operating at a shorter wavelength was selected to study the feasibility of OCT as an in-line monitoring method. Since motion artefacts can occur in spectral-domain OCT as a result of tablet or sensor-head movement, a basic understanding of the relationship between tablet speed and motion effects is essential for correctly quantifying and qualifying the tablet coating. Experimental data were acquired by moving the sensor head of the OCT system across a static tablet bed. Although examining the homogeneity of the coating became more difficult with increasing transverse speed of the tablets, the determination of the coating thickness remained highly accurate at speeds up to 0.7 m/s. The presented OCT setup enables in-line investigation of intra- and inter-tablet coating uniformity during the coating process. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Differential Responses to a Visual Self-Motion Signal in Human Medial Cortical Regions Revealed by Wide-View Stimulation

    PubMed Central

    Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi

    2016-01-01

    Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588

  15. Periscopic Spine Surgery

    DTIC Science & Technology

    2005-03-01

    Guided Technologies, Boulder, CO; motion path built from three orthogonal sinusoidal paths is Optotrak, Northern Digital, Waterloo, ON) optical tracking...Hopkins University using an Optotrak to evaluate the simulated motions. The Optotrak (Northern Digital, Inc.) is an optical high-precision 3-D motion...verify the accuracy of the RMS, tests were carried out using the Optotrak, which was placed about 2 m from the simulator. For each test, two sets of data

  16. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
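
    The eigenspace construction, centering the expression motion signals and reducing them with PCA, can be sketched via the SVD (a generic sketch of the PCA step only, with our own names; the full PIEES pipeline also includes phoneme-based time-warping and subtraction):

```python
import numpy as np

def build_expression_eigenspace(signals, n_components):
    """PCA-reduce a set of expression motion signals (one flattened signal per
    row) into a low-dimensional expression eigenspace."""
    mean = signals.mean(axis=0)
    centered = signals - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]       # principal expression axes (eigenspace)
    coeffs = centered @ basis.T     # low-dimensional expression coordinates
    return mean, basis, coeffs
```

    New dynamic expression signals can then be synthesized by sampling coefficient trajectories in this space and mapping them back through the basis, which is the role the PIEES model plays in the blending stage.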

  17. An Extended Passive Motion Paradigm for Human-Like Posture and Movement Planning in Redundant Manipulators

    PubMed Central

    Tommasino, Paolo; Campolo, Domenico

    2017-01-01

    A major challenge in robotics and computational neuroscience relates to the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is also able to predict movements. More specifically, the Passive Motion Paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit experimental postural strategies. Here, we show how joint damping, and in particular its anisotropy, directly affects task-space movements. Rather than using the damping parameters to fit task-space motions a posteriori, we make the a priori hypothesis that damping is proportional to stiffness. Remarkably, this allows a postural-fitted model to also capture dynamic features such as the curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954

  18. Involvement of the ventral premotor cortex in controlling image motion of the hand during performance of a target-capturing task.

    PubMed

    Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun

    2005-07-01

    The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.

  19. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass, and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike on the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project, and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to use traditional motion capture systems, due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed using open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means of producing quantitative, comparable motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera three-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high contrast markers were placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, in which a wand is swept through the camera scenes simultaneously, was performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in a side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
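The final 3D reconstruction step described above can be illustrated with linear (DLT) triangulation from two calibrated views. This is a minimal NumPy sketch, not the project's actual OpenCV pipeline; the camera matrices and marker point are invented for the sanity check.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics times extrinsics).
    x1, x2 : (u, v) pixel centroids of the same marker in each view.
    Returns the 3D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras observing a known point, as a sanity check.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 unit in x
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # → close to [0.2, -0.1, 4.0]
```

In practice the projection matrices come from the intrinsic and extrinsic calibrations described in the abstract, and the 2D inputs are the marker centroids.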

  20. A motion capture library for the study of identity, gender, and emotion perception from biological motion.

    PubMed

    Ma, Yingliang; Paterson, Helena M; Pollick, Frank E

    2006-02-01

We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement.

  1. Optical surface profiling of orb-web spider capture silks.

    PubMed

    Kane, D M; Joyce, A M; Staib, G R; Herberstein, M E

    2010-09-01

    Much spider silk research to date has focused on its mechanical properties. However, the webs of many orb-web spiders have evolved for over 136 million years to evade visual detection by insect prey. It is therefore a photonic device in addition to being a mechanical device. Herein we use optical surface profiling of capture silks from the webs of adult female St Andrews cross spiders (Argiope keyserlingi) to successfully measure the geometry of adhesive silk droplets and to show a bowing in the aqueous layer on the spider capture silk between adhesive droplets. Optical surface profiling shows geometric features of the capture silk that have not been previously measured and contributes to understanding the links between the physical form and biological function. The research also demonstrates non-standard use of an optical surface profiler to measure the maximum width of a transparent micro-sized droplet (microlens).

  2. Motion cues that make an impression

    PubMed Central

    Koppensteiner, Markus

    2013-01-01

The current study presents a methodology for analyzing first impressions on the basis of minimal motion information. To test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on each speaker's forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information. PMID:24223432

  3. Biomechanical Evaluation of an Electric Power-Assisted Bicycle by a Musculoskeletal Model

    NASA Astrophysics Data System (ADS)

    Takehara, Shoichiro; Murakami, Musashi; Hase, Kazunori

In this study, we construct a system for evaluating the muscular activity of the lower limbs when a human pedals an electric power-assisted bicycle. The evaluation system is composed of an electric power-assisted bicycle, a numerical simulator and a motion capture system. The electric power-assisted bicycle in this study has a pedal with an attached force sensor. The numerical simulator for pedaling motion is a musculoskeletal model of a human. The motion capture system measures the joint angles of the lower limb. We examine the influence of the electric power-assist force on each muscle of the human trunk and legs. First, an experiment on pedaling motion is performed. Then, the musculoskeletal model is driven using the experimental data. We discuss the influence of the electric power assist on each muscle. We find that muscular activity is decreased by the electric power-assisted bicycle, and the reduction in the muscular force required for pedaling is shown quantitatively for every muscle.

  4. An experimental protocol for the definition of upper limb anatomical frames on children using magneto-inertial sensors.

    PubMed

    Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E

    2013-01-01

Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which has improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that maps orientation information from each sensor to a physiological reference frame. To date, even though several calibration procedures are available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest performance comparable to similar studies on adults and highlight some critical issues, opening the way to further improvements.

  5. 1200130

    NASA Image and Video Library

    2012-03-19

Peter Ma, EV74, wears a suit covered with spherical reflectors that enable his motions to be tracked by the motion capture system. The human model in red on the screen in the background represents the system-generated image of Peter's position.

  6. A motion detection system for AXAF X-ray ground testing

    NASA Technical Reports Server (NTRS)

    Arenberg, Jonathan W.; Texter, Scott C.

    1993-01-01

    The concept, implementation, and performance of the motion detection system (MDS) designed as a diagnostic for X-ray ground testing for AXAF are described. The purpose of the MDS is to measure the magnitude of a relative rigid body motion among the AXAF test optic, the X-ray source, and X-ray focal plane detector. The MDS consists of a point source, lens, centroid detector, transimpedance amplifier, and computer system. Measurement of the centroid position of the image of the optical point source provides a direct measure of the motions of the X-ray optical system. The outputs from the detector and filter/amplifier are digitized and processed using the calibration with a 50 Hz bandwidth to give the centroid's location on the detector. Resolution of 0.008 arcsec has been achieved by this system. Data illustrating the performance of the motion detection system are also presented.

  7. Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration.

    PubMed

    Abouei, Elham; Lee, Anthony M D; Pahlevaninezhad, Hamid; Hohert, Geoffrey; Cua, Michelle; Lane, Pierre; Lam, Stephen; MacAulay, Calum

    2018-01-01

We present a method for the correction of motion artifacts present in two- and three-dimensional in vivo endoscopic images produced by rotary-pullback catheters. This method can correct for cardiac/breathing-based motion artifacts and catheter-based motion artifacts such as nonuniform rotational distortion (NURD). This method assumes that en face tissue imaging contains slowly varying structures that are roughly parallel to the pullback axis. The method reduces motion artifacts using a dynamic time warping solution through a cost matrix that measures similarities between adjacent frames in en face images. We optimize and demonstrate the suitability of this method using real and simulated NURD phantoms and in vivo endoscopic pulmonary optical coherence tomography and autofluorescence images. Qualitative and quantitative evaluations of the method show an enhancement of the image quality.
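The dynamic time warping step at the core of the method above can be sketched with a minimal implementation: fill a cumulative-cost table over a precomputed cost matrix, then backtrack to recover the alignment. This is an illustrative pure NumPy sketch with a toy cost matrix, not the paper's implementation, where the cost measures similarity between adjacent en face frames.

```python
import numpy as np

def dtw_path(cost):
    """Dynamic-time-warping alignment through a precomputed cost matrix.

    cost[i, j] is the dissimilarity between sample i of one frame and
    sample j of the adjacent frame (e.g. rows of an en face image).
    Returns the minimum-cost monotonic path as a list of (i, j) pairs.
    """
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Each cell extends the cheapest of the three admissible moves.
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the corner to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: the second sequence is the first shifted by one sample,
# mimicking a small rotational offset between adjacent frames.
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 3.0])
cost = np.abs(a[:, None] - b[None, :])
print(dtw_path(cost))
```

The recovered path gives, for each azimuthal position in one frame, the matching position in the neighboring frame, which is what the artifact correction warps against.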

  8. SU-E-J-194: Continuous Patient Surface Monitoring and Motion Analysis During Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, E; Rioux, A; Benedict, S

    2015-06-15

Purpose: Continuous monitoring of lung patient motion during delivery is critical for ensuring adequate target volume margins in stereotactic body radiotherapy (SBRT). This work assesses the deviation of the patient surface motion using a real-time surface tracking system throughout treatment delivery. Methods: Our SBRT protocol employs abdominal compression to reduce diaphragm movement to within 1 cm, and this is confirmed daily with fluoroscopy. Most patients are prescribed 3–5 fractions, and on treatment day a repeat motion analysis with fluoroscopy is performed, followed by a kV CBCT that is aligned with the original planning CT image for 3D setup confirmation. During this entire process, patient surface data restricted to the whole chest or the sternum at the middle of the breathing cycle were captured using the AlignRT optical surface tracking system and defined as a reference surface. For 10 patients, the deviation of the patient position from the reference surface was recorded during the SBRT delivery in the anterior-posterior (AP) direction at 3–6 measurements per second. Results: On average, the patient position deviated from the reference surface by more than 4 mm, 3 mm and 2 mm in the AP direction for 0.95%, 3.7% and 11.1% of the total treatment time, respectively. Only one of the 10 patients showed a maximum surface deviation during the SBRT delivery greater than 1 cm. The average deviation of the patient surface from the reference surface during the SBRT delivery was not greater than 1.6 mm for any patient. Conclusion: This investigation indicates that AP motion can be significant even though its frequency is low. Continuous monitoring during SBRT has demonstrated value in verifying that margins selected for SBRT are appropriate, and the use of non-ionizing, high-frequency imaging can provide useful indicators of motion during treatment.

  9. Optical evaluation of the wave filtering properties of graded undulated lattices

    NASA Astrophysics Data System (ADS)

    Trainiti, G.; Rimoli, J. J.; Ruzzene, M.

    2018-03-01

    We investigate and experimentally demonstrate the elastic wave filtering properties of graded undulated lattices. Square reticulates composed of curved beams are characterized by graded mechanical properties which result from the spatial modulation of the curvature parameter. Among such properties, the progressive formation of frequency bandgaps leads to strong wave attenuation over a broad frequency range. The experimental investigation of wave transmission and the detection of full wavefields effectively illustrate this behavior. Transmission measurements are conducted using a scanning laser Doppler vibrometer, while a dedicated digital image correlation procedure is implemented to capture in-plane wave motion at selected frequencies. The presented results illustrate the broadband attenuation characteristics resulting from spatial grading of the lattice curvature, whose in-depth investigation is enabled by the presented experimental procedures.

  10. Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.

    PubMed

    King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T

    2012-01-01

Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition.
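The statistical model above is built by principal component analysis of motion states. A minimal NumPy sketch with synthetic displacement fields (sizes, signals and the two-mode structure are invented for illustration) shows the build-and-reconstruct cycle; in the paper the mode weights would be driven by the 2-D image navigator rather than taken from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each row is one "motion state", i.e. a flattened
# field of voxel displacements estimated from dynamic 3-D MRI.
n_states, n_voxels = 50, 200
breathing = np.sin(np.linspace(0, 10 * np.pi, n_states))   # quasi-periodic drive
basis = rng.standard_normal((2, n_voxels))
states = np.outer(breathing, basis[0]) + 0.1 * np.outer(breathing**2, basis[1])

# PCA of the motion states: subtract the mean, then take the SVD.
mean = states.mean(axis=0)
U, S, Vt = np.linalg.svd(states - mean, full_matrices=False)
n_comp = 2
components = Vt[:n_comp]                   # principal motion modes
weights = (states - mean) @ components.T   # per-state mode weights

# A motion estimate is the mean field plus a weighted sum of the modes.
recon = mean + weights @ components
err = np.abs(recon - states).max()
print(f"max reconstruction error with {n_comp} modes: {err:.2e}")
```

Because the toy data are exactly rank two about the mean, two modes reconstruct them to floating-point precision; real respiratory data would need the model's variability handling.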

  11. Note: A resonating reflector-based optical system for motion measurement in micro-cantilever arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathishkumar, P.; Punyabrahma, P.; Sri Muthu Mrinalini, R.

    A robust, compact optical measurement unit for motion measurement in micro-cantilever arrays enables development of portable micro-cantilever sensors. This paper reports on an optical beam deflection-based system to measure the deflection of micro-cantilevers in an array that employs a single laser source, a single detector, and a resonating reflector to scan the measurement laser across the array. A strategy is also proposed to extract the deflection of individual cantilevers from the acquired data. The proposed system and measurement strategy are experimentally evaluated and demonstrated to measure motion of multiple cantilevers in an array.

  12. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow

    PubMed Central

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L.; Migliore, Elaina M.; Chipps, Esther M.; Buck, Jacalyn

    2016-01-01

A fundamental understanding of multitasking within nursing workflow is important in today’s dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 minutes to 109 minutes, averaging 44.98 minutes (18.63%). We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives. PMID:28269924

  13. On the correlation between motion data captured from low-cost gaming controllers and high precision encoders.

    PubMed

    Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K

    2012-01-01

Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low cost. However, a proper quantitative analysis regarding their suitability for use in motion capture, rehabilitation, and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony PlayStation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with acceleration and angular velocity data computed from a high-resolution encoder. The results show high correlation between the sensor data from the controllers and the data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.
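The comparison above reduces to correlating a controller's sensor stream against an encoder-derived reference. A hedged sketch with synthetic signals: the Pearson formula is standard, but the traces below (a noisy accelerometer reading versus a twice-differentiated position trace) are invented stand-ins for the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# "Encoder" acceleration derived by twice differentiating a position trace,
# versus a noisy "controller" accelerometer reading of the same motion.
t = np.linspace(0, 2 * np.pi, 500)
dt = t[1] - t[0]
position = np.sin(t)
encoder_acc = np.gradient(np.gradient(position, dt), dt)
controller_acc = -np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)

r = pearson_r(encoder_acc, controller_acc)
print(f"r = {r:.3f}")  # high correlation expected for matching motion
```

A high r between the streams is the quantitative basis for statements like "the Wiimote is more consistent" in the abstract.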

  14. Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data

    NASA Astrophysics Data System (ADS)

    Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia

    Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.

  15. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow.

    PubMed

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn

    2016-01-01

A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 minutes to 109 minutes, averaging 44.98 minutes (18.63%). We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.

  16. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

3D motion capture is difficult when performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modeled well with a pinhole camera model, which makes it difficult to estimate depth information. In this work, to recover the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.
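As a minimal illustration of the marker-extraction step, the sketch below computes an intensity-weighted centroid of above-threshold pixels; this stands in for the paper's adaptive-thresholding pipeline, and the frame, blob position and threshold are all invented.

```python
import numpy as np

def bright_centroid(img, thresh):
    """Intensity-weighted centroid (x, y) of pixels above a threshold,
    giving the 2-D marker position fed into stereo reconstruction."""
    ys, xs = np.nonzero(img > thresh)
    w = img[ys, xs].astype(float)
    return np.array([(xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()])

# Toy frame with one bright Gaussian blob centered at (x, y) = (12.0, 20.0).
yy, xx = np.mgrid[0:40, 0:40]
img = 255.0 * np.exp(-((xx - 12.0)**2 + (yy - 20.0)**2) / 8.0)
print(bright_centroid(img, 25.0))  # → approximately [12.0, 20.0]
```

Sub-pixel centroids like this are what make marker-based systems accurate despite modest camera resolution; a real outdoor pipeline would add the adaptive threshold and marker matching described in the abstract.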

  17. The weaker effects of First-order mean motion resonances in intermediate inclinations

    NASA Astrophysics Data System (ADS)

Chen, YuanYuan; Quillen, Alice C.; Ma, Yuehua

    2017-10-01

During planetary migration, a planet or planetesimal can be captured into a low-order mean motion resonance with another planet. Using a second-order expansion of the disturbing function in eccentricity and inclination, we explore the sensitivity of the capture probability of first-order mean motion resonances to orbital inclination. We find that second-order inclination contributions affect the resonance strengths, reducing them at intermediate inclinations of around 10-40° for the major first-order resonances. We also integrated Hamilton's equations with arbitrary initial arguments and obtained the trends of resonance capture probability versus orbital inclination for different resonances and different particle or planetary eccentricities. Inclination ranges in which the resonances are weaker generally appear where the resonance strengths are low, around 10-40°. These weaker ranges disappear at higher particle eccentricity (≳0.05) or planetary eccentricity (≳0.05). The existence of these weaker ranges implies that intermediate-inclination objects are less likely to be disturbed by, or captured into, the first-order resonances, so a larger fraction of them than of low-inclination objects would have entered the chaotic region around Neptune during the epoch of Neptune's outward migration. This advantage would also make high-inclination particles more likely to be captured as Neptune Trojans, which might account for the unexpectedly high fraction of high-inclination Neptune Trojans.

  18. Computational simulation of extravehicular activity dynamics during a satellite capture attempt.

    PubMed

    Schaffner, G; Newman, D J; Robinson, S K

    2000-01-01

    A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.

  19. Near-Field, On-Chip Optical Brownian Ratchets.

    PubMed

    Wu, Shao-Hua; Huang, Ningfeng; Jaquay, Eric; Povinelli, Michelle L

    2016-08-10

    Nanoparticles in aqueous solution are subject to collisions with solvent molecules, resulting in random, Brownian motion. By breaking the spatiotemporal symmetry of the system, the motion can be rectified. In nature, Brownian ratchets leverage thermal fluctuations to provide directional motion of proteins and enzymes. In man-made systems, Brownian ratchets have been used for nanoparticle sorting and manipulation. Implementations based on optical traps provide a high degree of tunability along with precise spatiotemporal control. Here, we demonstrate an optical Brownian ratchet based on the near-field traps of an asymmetrically patterned photonic crystal. The system yields over 25 times greater trap stiffness than conventional optical tweezers. Our technique opens up new possibilities for particle manipulation in a microfluidic, lab-on-chip environment.

  20. Effects of background stimulation upon eye-movement information.

    PubMed

    Nakamura, S

    1996-04-01

To investigate the effects of background stimulation upon eye-movement information (EMI), the perceived deceleration of target motion during pursuit eye movement (the Aubert-Fleischl paradox) was analyzed. In the experiment, a striped pattern was used as a background stimulus, with various brightness contrasts and spatial frequencies used to systematically manipulate the attributes of the background stimulus. Analysis showed that the retinal-image motion of the background stimulus (optic flow) affected eye-movement information and that the effects of optic flow became stronger when high-contrast, low-spatial-frequency stripes were presented as the background stimulus. In conclusion, optic flow is one source of eye-movement information in determining real object motion, and the effectiveness of optic flow depends on the attributes of the background stimulus.

  1. Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System

    NASA Astrophysics Data System (ADS)

    Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.

    2017-01-01

‘Silat’ is a Malay traditional martial art that is practiced at both amateur and professional levels. The intensity of the motion spurs scientific research in biomechanics. The main purpose of this abstract is to present the biomechanics method used in the study of ‘silat’. Using a 3D depth camera motion capture system, two subjects each performed ‘Jurus Satu’ three times. One subject was set as the benchmark for the research. The videos were captured and the data processed using the 3D depth camera server system in the form of 16 3D body-joint coordinates, which were then transformed into displacement, velocity and acceleration components using Microsoft Excel for data calculation and MATLAB software for simulation of the body. The translated data serve as an input to differentiate both subjects’ execution of the ‘Jurus Satu’. Nine primary movements, with the addition of five secondary movements, were observed visually frame by frame from the simulation to find the exact frame in which each movement takes place. Further analysis involves differentiating both subjects’ execution by referring to the mean and standard deviation of joints for each parameter stated. The findings provide useful data on joint kinematic parameters, help improve the execution of ‘Jurus Satu’, and exhibit the process of learning a relatively unknown movement through the use of a motion capture system.
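The coordinate-to-kinematics transformation described above (displacement, velocity and acceleration from joint coordinates) can be sketched with central finite differences. This is an illustrative NumPy sketch, not the study's Excel/MATLAB workflow, and the joint trace is a toy uniform motion.

```python
import numpy as np

def kinematics_from_positions(pos, dt):
    """Displacement, velocity and acceleration of one joint from a sequence
    of 3D positions sampled at interval dt, via central differences."""
    pos = np.asarray(pos, float)
    disp = pos - pos[0]                   # displacement from the first frame
    vel = np.gradient(pos, dt, axis=0)    # central-difference velocity
    acc = np.gradient(vel, dt, axis=0)    # central-difference acceleration
    return disp, vel, acc

# Toy joint trace: uniform motion along x at 2 m/s, sampled at 100 Hz for 1 s.
dt = 0.01
t = np.arange(0, 1, dt)
pos = np.stack([2.0 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
disp, vel, acc = kinematics_from_positions(pos, dt)
print(vel[50])  # → approximately [2.0, 0.0, 0.0]
```

With noisy depth-camera data these derivatives amplify noise, so real pipelines usually low-pass filter the coordinates before differentiating.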

  2. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

This study aims to build a motion capture instrument using inertial measurement unit (IMU) sensors to assist in the analysis of biomechanics. The sensors used are accelerometers and gyroscopes. Estimation of sensor orientation is done by digital motion processing at each sensor node. There are nine sensor nodes attached to the upper limbs. The sensors are connected to the PC via a wireless sensor network. The kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs. The output of the kinematic model is the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.
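Orientation estimation from an accelerometer/gyroscope pair, done on-chip in the abstract's sensor nodes, is commonly illustrated with a complementary filter: integrate the gyro rate, then pull the estimate toward the accelerometer's gravity-based tilt to cancel drift. The sketch below is a generic one-axis example with invented bias and pose values, not the esMOCA firmware.

```python
import math

def complementary_tilt(gyro_rates, accels, dt, alpha=0.98):
    """One-axis complementary filter.

    gyro_rates : angular rate about one axis (rad/s) per sample
    accels     : (ax, az) accelerometer pairs; tilt = atan2(ax, az)
    alpha      : trust in the integrated gyro versus the accelerometer
    """
    angle, out = 0.0, []
    for w, (ax, az) in zip(gyro_rates, accels):
        acc_angle = math.atan2(ax, az)  # tilt implied by the gravity direction
        angle = alpha * (angle + w * dt) + (1 - alpha) * acc_angle
        out.append(angle)
    return out

# Static pose at a true tilt of 0.3 rad; the gyro has a constant 0.02 rad/s
# bias, which would accumulate to 0.4 rad over 20 s if integrated alone.
dt, n = 0.01, 2000
gyro = [0.02] * n
acc = [(math.sin(0.3), math.cos(0.3))] * n
est = complementary_tilt(gyro, acc, dt)
print(f"final tilt estimate: {est[-1]:.3f} rad (true value 0.3)")
```

The accelerometer term bounds the bias-driven drift to a small steady offset; full 3-D nodes use the same idea in quaternion form.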

  3. Using automatic generation of Labanotation to protect folk dance

    NASA Astrophysics Data System (ADS)

    Wang, Jiaji; Miao, Zhenjiang; Guo, Hao; Zhou, Ziming; Wu, Hao

    2017-01-01

Labanotation uses symbols to describe human motion and is an effective means of protecting folk dance. We use motion capture data to automatically generate Labanotation. First, we convert the motion capture data in the Biovision Hierarchy (BVH) file into three-dimensional coordinate data. Second, we divide human motion into element movements. Finally, we analyze each movement and find the corresponding notation. Our work has been supervised by an expert in Labanotation to ensure the correctness of the results. At present, the work deals with a subset of symbols in Labanotation that correspond to several basic movements. Labanotation contains many symbols, and several new symbols may be introduced for improvement in the future. We will refine our work to handle more symbols. The automatic generation of Labanotation can greatly improve the efficiency of documenting movements. Thus, our work will significantly contribute to the protection of folk dance and other action arts.
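As a toy illustration of the final mapping step, the sketch below quantizes a landmark's horizontal displacement into one of eight direction labels. This simplified mapping is our assumption for illustration only, not the authors' algorithm; real Labanotation also encodes level, timing and body part.

```python
import math

# Hypothetical simplified direction vocabulary (eight horizontal directions).
DIRECTIONS = ["forward", "right-forward", "right", "right-backward",
              "backward", "left-backward", "left", "left-forward"]

def laban_direction(dx, dy):
    """Quantize a displacement vector into one of eight direction labels.

    Convention (an assumption): dy > 0 is 'forward', dx > 0 is 'right',
    so the angle is measured clockwise from forward.
    """
    angle = math.degrees(math.atan2(dx, dy)) % 360
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]

print(laban_direction(0.0, 1.0))   # → forward
print(laban_direction(1.0, 1.0))   # → right-forward
print(laban_direction(-1.0, 0.0))  # → left
```

A full pipeline would apply such a classifier to each element movement segmented from the BVH-derived coordinates, then emit the matching notation symbol.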

  4. KSC-08pd1901

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – Professor Peter Voci, NYIT MOCAP (Motion Capture) team director, (left) hands a component of the Orion Crew Module mockup to one of three technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  5. Local motion adaptation enhances the representation of spatial structure at EMD arrays

    PubMed Central

    Lindemann, Jens P.; Egelhaaf, Martin

    2017-01-01

    Neuronal representation and extraction of spatial information are essential for behavioral control. For flying insects, a plausible way to gain spatial information is to exploit distance-dependent optic flow that is generated during translational self-motion. Optic flow is computed by arrays of local motion detectors retinotopically arranged in the second neuropile layer of the insect visual system. These motion detectors have adaptive response characteristics, i.e. their responses to motion with a constant or only slowly changing velocity decrease, while their sensitivity to rapid velocity changes is maintained or even increases. We analyzed by a modeling approach how motion adaptation affects signal representation at the output of arrays of motion detectors during simulated flight in artificial and natural 3D environments. We focused on translational flight, because spatial information is only contained in the optic flow induced by translational locomotion. Indeed, flies, bees and other insects segregate their flight into relatively long intersaccadic translational flight sections interspersed with brief and rapid saccadic turns, presumably to maximize periods of translation (80% of the flight). With a novel adaptive model of the insect visual motion pathway we could show that the motion detector responses to background structures of cluttered environments are largely attenuated as a consequence of motion adaptation, while responses to foreground objects stay constant or even increase. This conclusion even holds under the dynamic flight conditions of insects. PMID:29281631
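
    The local motion detectors referenced above are commonly formalized as Hassenstein-Reichardt correlators. A minimal discrete-time sketch of one such elementary motion detector (using a first-order low-pass as a stand-in for the delay filter; the coefficient `tau` is an illustrative assumption, not a parameter from the paper):

```python
def reichardt_emd(left, right, tau=0.9):
    """Hassenstein-Reichardt elementary motion detector (discrete-time sketch).

    left, right: luminance signals from two neighboring photoreceptors.
    tau: low-pass feedback coefficient acting as the delay-line surrogate.
    Returns per-sample opponent output; positive = motion from left to right.
    """
    lp_left = lp_right = 0.0
    out = []
    for l, r in zip(left, right):
        # First-order low-pass acts as the delay in each half-detector.
        lp_left = tau * lp_left + (1 - tau) * l
        lp_right = tau * lp_right + (1 - tau) * r
        # Correlate the delayed signal of one arm with the undelayed signal
        # of the other, then subtract the mirror half-detector (opponency).
        out.append(lp_left * r - lp_right * l)
    return out
```

    A stimulus that passes the left input before the right yields a net positive response; the reversed direction yields a net negative one.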

  6. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar

    2015-01-01

    Introduction: When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. It has been shown that using a spherical coordinate system allows Anthropometry and Biomechanics Facility (ABF) personnel to increase their ability to transmit important human mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project was to use innovative analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify a new method before it was implemented in the ABF's data analysis practices. A mechanical test rig was built and tracked in 3D using an optical motion capture system. Its position and orientation were reported in both Euler and spherical reference systems. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder to include the rest of the joints of the body. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. These visualization methods will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development. Results: Initial results demonstrated that a spherical coordinate system is helpful in describing and visualizing the motion of a space suit. The system is particularly useful in describing the motion of the shoulder, where multiple degrees of freedom can lead to very complex motion paths.
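
    The Cartesian-to-spherical mapping underlying this representation can be sketched as follows (the axis conventions are an illustrative assumption; the ABF's actual definitions may differ):

```python
import math

def to_spherical(x, y, z):
    """Cartesian segment vector -> (radius, azimuth, elevation) in radians."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)      # rotation about the vertical (z) axis
    elevation = math.asin(z / r)    # angle above the horizontal (x-y) plane
    return r, azimuth, elevation

def to_cartesian(r, azimuth, elevation):
    """Inverse mapping, useful for round-trip checks and visualization."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z
```

    Unlike an Euler sequence, each spherical coordinate can be read independently, which is what makes shoulder motion paths easier to visualize.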

  7. Induced transducer orientation during ultrasound imaging: effects on abdominal muscle thickness and bladder position.

    PubMed

    Whittaker, Jackie L; Warner, Martin B; Stokes, Maria J

    2009-11-01

    The use of ultrasound imaging (USI) by physiotherapists to assess muscle behavior in clinical settings is increasing. However, there is relatively little evidence of whether the clinical environment is conducive to valid and reliable measurements. Accurate USI measurements depend on maintaining a relatively stationary transducer position, because motion may distort the image and lead to erroneous conclusions. This would seem particularly important during dynamic studies typical of a physiotherapy assessment. What is not known is how much transducer motion can occur before error is introduced. The aim of this study is to shed some light on this question. Eight healthy volunteers (19 to 52 y) participated. USI images were taken of the lateral abdominal wall (LAW) and bladder base (midline suprapubic) at various manually induced transducer orientations (approximately -10 to 10 degrees about 3 axes of rotation), which were quantified by a digital optical motion capture system. Measurements of transversus abdominis (TrA) thickness and bladder base position (cranial/caudal and anterior/posterior) were calculated. Repeated measures analysis of variance was performed to determine if the measurements obtained at the induced transducer orientations were statistically different (p<0.05) from an image corresponding to a reference or starting transducer orientation. Motion analysis data corresponding to measurements that did not differ from reference image measurements were summarized to provide a range of acceptable transducer motion (relative to the pelvis) for clockwise (CW)/counter-clockwise (CCW) rotation, cranial/caudal tilting, medial/lateral tilting and inward/outward displacement. There were no significant changes in TrA thickness measurements if CW/CCW transducer motion was <9 degrees and cranial/caudal or medial/lateral transducer tilting was <5 degrees. Further, there were no significant changes in measurements of bladder base position if CW/CCW transducer motion was <10 degrees, cranial/caudal or medial/lateral transducer tilting was <10 degrees and 8 degrees, respectively, and inward/outward motion was <8 mm. These findings provide guidance on acceptable amounts of transducer motion relative to the pelvis when generating measurements of TrA thickness and bladder base position. Future sonographic studies and clinical assessment investigating these parameters could take these findings into account to improve imaging technique reliability.

  8. In vivo study of rat cortical hemodynamics using a stereotaxic-apparatus-compatible photoacoustic microscope.

    PubMed

    Guo, Heng; Chen, Qian; Qi, Weizhi; Chen, Xingxing; Xi, Lei

    2018-04-19

    Brain imaging is an important technique in cognitive neuroscience. In this article, we designed a stereotaxic-apparatus-compatible photoacoustic microscope for studies of rat cortical hemodynamics. Compared with existing optical resolution photoacoustic microscopy (ORPAM) systems, the probe is fast, lightweight, and miniature. In this microscope, we integrated a miniaturized ultrasound transducer with a center frequency of 10 MHz to detect photoacoustic signals and a 2-dimensional (2D) microelectromechanical system (MEMS) scanner to achieve raster scanning of the optical focus. Based on phantom evaluation, this imaging probe has a high lateral resolution of 3.8 μm and an effective imaging domain of 2 × 2 mm². Unlike conventional ORPAM systems, compatibility with a standard stereotaxic apparatus enables broad studies of rodent brains without any motion artifact. To show its capability, we successfully captured red blood cell flow in capillaries, monitored the vascular changes during bleeding and blood infusion, and visualized cortical hemodynamics induced by middle cerebral artery occlusion. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Suppression of extraneous thermal noise in cavity optomechanics.

    PubMed

    Zhao, Yi; Wilson, Dalziel J; Ni, K-K; Kimble, H J

    2012-02-13

    Extraneous thermal motion can limit displacement sensitivity and radiation pressure effects, such as optical cooling, in a cavity-optomechanical system. Here we present an active noise suppression scheme and its experimental implementation. The main challenge is to selectively sense and suppress extraneous thermal noise without affecting motion of the oscillator. Our solution is to monitor two modes of the optical cavity, each with different sensitivity to the oscillator's motion but similar sensitivity to the extraneous thermal motion. This information is used to imprint "anti-noise" onto the frequency of the incident laser field. In our system, based on a nano-mechanical membrane coupled to a Fabry-Pérot cavity, simulation and experiment demonstrate that extraneous thermal noise can be selectively suppressed and that the associated limit on optical cooling can be reduced.

  10. Video Analysis of Rolling Cylinders

    ERIC Educational Resources Information Center

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…

  11. Quantum correlations from a room-temperature optomechanical cavity

    NASA Astrophysics Data System (ADS)

    Purdy, T. P.; Grutter, K. E.; Srinivasan, K.; Taylor, J. M.

    2017-06-01

    The act of position measurement alters the motion of an object being measured. This quantum measurement backaction is typically much smaller than the thermal motion of a room-temperature object and thus difficult to observe. By shining laser light through a nanomechanical beam, we measure the beam’s thermally driven vibrations and perturb its motion with optical force fluctuations at a level dictated by the Heisenberg measurement-disturbance uncertainty relation. We demonstrate a cross-correlation technique to distinguish optically driven motion from thermally driven motion, observing this quantum backaction signature up to room temperature. We use the scale of the quantum correlations, which is determined by fundamental constants, to gauge the size of thermal motion, demonstrating a path toward absolute thermometry with quantum mechanically calibrated ticks.

  12. Measures and Relative Motions of Some Mostly F. G. W. Struve Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2012-04-01

    Measures of 59 pairs of double stars with long observational histories using "lucky imaging" techniques are reported. Relative motions of the 59 pairs are investigated using histories of observation, scatter plots of relative motion, and ordinary least-squares (OLS) and total proper motion analyses performed in "R," an open source programming language. A scatter plot of the coefficients of determination derived from the OLS y|epoch and OLS x|epoch analyses clearly separates common proper motion pairs from optical pairs and what are termed "long-period binary candidates." Differences in proper motion separate optical pairs from long-period binary candidates. An Appendix is provided that details how to use known rectilinear pairs as calibration pairs for the program REDUC.
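
    The OLS screening described above reduces, per coordinate, to fitting separation against epoch and examining the coefficient of determination. A minimal sketch in Python rather than the authors' R (function and variable names are illustrative):

```python
def ols_fit(epochs, values):
    """Ordinary least squares fit value = a + b*epoch; returns (a, b, r2).

    A high r2 indicates rectilinear relative motion (an optical pair or a
    long-period binary candidate); a low r2 suggests scatter or curvature.
    """
    n = len(epochs)
    mx = sum(epochs) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in epochs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(epochs, values))
    b = sxy / sxx                     # slope: drift per unit epoch
    a = my - b * mx                   # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(epochs, values))
    ss_tot = sum((y - my) ** 2 for y in values)
    r2 = 1.0 - ss_res / ss_tot        # coefficient of determination
    return a, b, r2
```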

  13. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.

  14. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    NASA Astrophysics Data System (ADS)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object while continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not been addressed for 4D-OCT images yet. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented tracking scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables its use for sub-epithelial tracking of microvessels for image guidance.

  15. Crossed beam roof target for motion tracking

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor)

    2009-01-01

    A system for detecting motion between a first body and a second body includes first and second detector-emitter pairs, disposed on the first body, and configured to transmit and receive first and second optical beams, respectively. At least a first optical rotator is disposed on the second body and configured to receive and reflect at least one of the first and second optical beams. First and second detectors of the detector-emitter pairs are configured to detect the first and second optical beams, respectively. Each of the first and second detectors is configured to detect motion between the first and second bodies in multiple degrees of freedom (DOFs). The first optical rotator includes a V-notch oriented to form an apex of an isosceles triangle with respect to a base of the isosceles triangle formed by the first and second detector-emitter pairs. The V-notch is configured to receive the first optical beam and reflect the first optical beam to both the first and second detectors. The V-notch is also configured to receive the second optical beam and reflect the second optical beam to both the first and second detectors.

  16. LTBP Program's Literature Review on Weigh-in-Motion System

    DOT National Transportation Integrated Search

    2016-06-01

    Truck size and weight are regulated using Federal and State legislation and policies to ensure safety and preserve bridge and high infrastructure. Weigh-in-motion (WIM) systems can capture the weight and other defining characteristics of the vehicles...

  17. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
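
    A Lissajous trajectory of the kind used for scanning is simply a pair of sinusoids at different frequencies on the two axes. A minimal sketch of the pattern generation (the frequencies, phase, and unit-square normalization are illustrative assumptions, not the authors' parameters):

```python
import math

def lissajous_scan(fx, fy, n_samples, phase=math.pi / 2):
    """Generate a Lissajous scanning trajectory on the unit square.

    fx, fy: x- and y-axis scan frequencies (cycles per acquisition).
    The pattern densely covers the field, with many self-crossings that
    can serve as redundancy for software-based motion correction.
    """
    points = []
    for i in range(n_samples):
        t = i / n_samples
        x = math.sin(2 * math.pi * fx * t)
        y = math.sin(2 * math.pi * fy * t + phase)
        points.append((x, y))
    return points
```

    The repeated visits to overlapping regions are what allow inconsistencies between passes to be attributed to eye motion and corrected in software.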

  18. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.

    PubMed

    Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi

    2014-10-20

    We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.

  19. Real-Time Observation of Internal Motion within Ultrafast Dissipative Optical Soliton Molecules

    NASA Astrophysics Data System (ADS)

    Krupa, Katarzyna; Nithyanandan, K.; Andral, Ugo; Tchofo-Dinda, Patrice; Grelu, Philippe

    2017-06-01

    Real-time access to the internal ultrafast dynamics of complex dissipative optical systems opens new explorations of pulse-pulse interactions and dynamic patterns. We present the first direct experimental evidence of the internal motion of a dissipative optical soliton molecule generated in a passively mode-locked erbium-doped fiber laser. We map the internal motion of a soliton-pair molecule by using a dispersive Fourier-transform imaging technique, revealing different categories of internal pulsation, including vibration-like and phase-drifting dynamics. Our experiments agree well with numerical predictions and bring insights to the analogy between self-organized states of light and states of matter.

  20. Vibration measurements of the Daniel K. Inouye Solar Telescope mount, Coudé rotator, and enclosure assemblies

    NASA Astrophysics Data System (ADS)

    McBride, William R.; McBride, Daniel R.

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, with a 4-meter off-axis primary mirror and a 16-meter rotating Coudé laboratory within the telescope pier. The off-axis design requires a mount similar to that of an 8-meter on-axis telescope. Both the telescope mount and the Coudé laboratory utilize a roller bearing technology in place of the more commonly used hydrostatic bearings. The telescope enclosure utilizes a crawler mechanism for the altitude axis. As these mechanisms have not previously been used in a telescope, understanding the vibration characteristics and the potential impact on the telescope image is important. This paper presents the methodology used to perform jitter measurements of the enclosure and of the mount bearings and servo system in a high-noise environment, utilizing seismic accelerometers and high dynamic-range data acquisition equipment along with digital signal processing (DSP) techniques. Data acquisition and signal processing were implemented in MATLAB. In the factory acceptance testing of the telescope mount, multiple accelerometers were strategically located to capture the six axes of motion of the primary and secondary mirror dummies. The optical sensitivity analysis was used to map these mirror mount displacements and rotations into units of image motion on the focal plane. Similarly, tests were done with the Coudé rotator, treating the entire rotating instrument lab as a rigid body. Testing was performed by recording accelerometer data while the telescope control system performed tracking operations typical of various observing scenarios. The analysis of the accelerometer data utilized noise-averaging fast Fourier transform (FFT) routines, spectrograms, and periodograms. To achieve adequate dynamic range at frequencies as low as 3 Hz, the use of special filters and advanced windowing functions was necessary. Numerous identical automated tests were compared to identify and select the data sets with the lowest level of external interference. Similar testing was performed on the telescope enclosure during the factory test campaign. The vibration of the enclosure altitude and azimuth mechanisms was characterized. This paper details jitter tests using accelerometers placed in locations that allowed the motion of the assemblies to be measured while the control system performed various moves typical of on-sky observations. The measurements were converted into the rigid body motion of the structures and mapped into image motion using the telescope's optical sensitivity analysis.

  1. SU-G-JeP1-04: Characterization of a High-Definition Optical Patient Surface Tracking System Across Five Installations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, T; Ayan, A; Cochran, E

    Purpose: To assess the performance of Varian’s real-time Optical Surface Monitoring System (OSMS) by measuring relative regular and irregular surface detection accuracy in 6 degrees of motion (6DoM), across multiple installations. Methods: Varian’s Intracranial SRS Package includes OSMS, which utilizes 3 HD camera/projector pods to map a patient surface, track intra-fraction motion, and gate the treatment beam if motion exceeds a threshold. To evaluate the motion-detection accuracy of OSMS, we recorded shifts of a cube-shaped phantom on a single Varian TrueBeam linear accelerator as known displacements were performed incrementally across the 6DoM. A subset of these measurements was repeated on identical OSMS installations. Phantom motion was driven using the TrueBeam treatment couch, and incremented across ±2cm in steps of 0.1mm, 1mm, and 1cm in the cardinal planes, and across ±40° in steps of 0.1°, 1°, and 5° in the rotational (couch kick) direction. Pitch and roll were evaluated across ±2.5° in steps of 0.1° and 1°. We then repeated this procedure with a frameless SRS setup with a head phantom in a QFix Encompass mask. Results: Preliminary data show OSMS is capable of detecting regular-surfaced phantom displacement within 0.03±0.04mm in the cardinal planes, and within 0.01±0.03° rotation across all planes for multiple installations. In a frameless SRS setup, OSMS is accurate to within 0.10±0.07mm and 0.04±0.07° across the 6DoM. Additionally, a reproducible “thermal drift” was observed during the first 15min of monitoring each day, and characterized by recording displacement of a stationary phantom each minute for 25min. Drift settled after 15min to an average delta of 0.26±0.03mm and 0.38±0.03mm from the initial capture in the Y and Z directions, respectively. Conclusion: For both regular surfaces and clinical SRS situations, OSMS exceeds quoted detection accuracy. To reduce error, a warm-up period should be employed to allow camera/projector pod thermal stabilization.

  2. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
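
    Of the block matching methods compared above, Exhaustive Search is the simplest: every candidate displacement inside the search window is scored, typically by the sum of absolute differences (SAD). A minimal sketch (the block size and search range are illustrative defaults, not the paper's settings):

```python
def block_match(prev, curr, block=4, search=2):
    """Exhaustive-search block matching: one motion vector per block.

    prev, curr: 2D lists of grayscale intensities with equal dimensions.
    For each block of `prev`, the (dy, dx) displacement into `curr` with
    the minimum sum of absolute differences (SAD) is returned.
    """
    h, w = len(prev), len(prev[0])
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best = (float("inf"), (0, 0))
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    # Skip displacements that fall outside the frame.
                    if not (0 <= by + dy <= h - block and 0 <= bx + dx <= w - block):
                        continue
                    # SAD between the prev block and the displaced curr block.
                    sad = sum(
                        abs(prev[by + y][bx + x] - curr[by + dy + y][bx + dx + x])
                        for y in range(block) for x in range(block))
                    if sad < best[0]:
                        best = (sad, (dy, dx))
            vectors.append(best[1])
    return vectors
```

    Faster schemes such as ARPS visit only a subset of these candidates, trading a small accuracy risk for a large reduction in SAD evaluations.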

  3. Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes

    NASA Technical Reports Server (NTRS)

    Abramovici, Alex

    2012-01-01

    A document discusses an electro-optic sensor that consists of a collimator, attached to one segment, and a quad diode, attached to an adjacent segment. Relative segment-segment motion causes the beam from the collimator to move across the quad diode, thus generating a measurable electric signal. This sensor type, which is relatively inexpensive, can be configured as an edge sensor or as a remote segment-segment motion sensor.

  4. Quantitative Analysis of Intracellular Motility Based on Optical Flow Model

    PubMed Central

    Li, Heng

    2017-01-01

    Analysis of cell mobility is a key issue for abnormality identification and classification in cell biology research. However, since cell deformation induced by various biological processes is random and cell protrusion is irregular, it is difficult to measure cell morphology and motility in microscopic images. To address this dilemma, we propose an improved variational optical flow model for quantitative analysis of intracellular motility, which not only extracts intracellular motion fields effectively but also deals with the optical flow computation problem at the border by taking advantage of formulations based on the L1 and L2 norms, respectively. In the energy functional of our proposed optical flow model, the data term takes the form of an L2 norm; the smoothness term adapts to regional features through an adaptive parameter, using the L1 norm near the edge of the cell and the L2 norm away from the edge. We further extract histograms of oriented optical flow (HOOF) after the optical flow field of intracellular motion is computed. Distances between different HOOFs are then calculated as intracellular motion features to grade the intracellular motion. Experimental results show that the features extracted from HOOFs provide new insights into the relationship between cell motility and particular pathological conditions. PMID:29065574
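
    The HOOF descriptor mentioned above bins flow vectors by orientation, weights each by its magnitude, and normalizes the result. A minimal sketch (the bin layout over [-pi, pi) is an illustrative assumption; the paper's exact binning may differ):

```python
import math

def hoof(flow, n_bins=8):
    """Histogram of oriented optical flow (HOOF), magnitude-weighted.

    flow: iterable of (u, v) flow vectors; returns an L1-normalized
    histogram over orientation bins spanning [-pi, pi).
    """
    hist = [0.0] * n_bins
    for u, v in flow:
        mag = math.hypot(u, v)
        if mag == 0.0:
            continue                                  # ignore zero flow
        angle = math.atan2(v, u)                      # in [-pi, pi]
        b = int((angle + math.pi) / (2 * math.pi) * n_bins) % n_bins
        hist[b] += mag                                # weight by magnitude
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

    Distances between such histograms (e.g. chi-square or Euclidean) can then serve as the motion features used for grading.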

  5. Validation of Clinical Observations of Mastication in Persons with ALS.

    PubMed

    Simione, Meg; Wilson, Erin M; Yunusova, Yana; Green, Jordan R

    2016-06-01

    Amyotrophic lateral sclerosis (ALS) is a progressive neurological disease that can result in difficulties with mastication, leading to malnutrition, choking or aspiration, and reduced quality of life. When evaluating mastication, clinicians primarily observe spatial and temporal aspects of jaw motion. The reliability and validity of clinical observations for detecting jaw movement abnormalities are unknown. The purpose of this study is to determine the reliability and validity of clinician-based ratings of chewing performance in neuro-typical controls and persons with varying degrees of chewing impairment due to ALS. Adults chewed a solid food consistency while full-face video was recorded along with jaw kinematic data using a 3D optical motion capture system. Five experienced speech-language pathologists watched the videos and rated the spatial and temporal aspects of chewing performance. The jaw kinematic data served as the gold standard for validating the clinicians' ratings. Results showed that the clinician-based rating of temporal aspects of chewing performance had strong inter-rater reliability and correlated well with comparable kinematic measures. In contrast, the reliability of rating the spatial and spatiotemporal aspects of chewing (i.e., range of motion of the jaw, consistency of the chewing pattern) was mixed. Specifically, ratings of range of motion were at best only moderately reliable. Ratings of chewing movement consistency were reliable but only weakly correlated with comparable measures of jaw kinematics. These findings suggest that clinician ratings of temporal aspects of chewing are appropriate for clinical use, whereas ratings of the spatial and spatiotemporal aspects of chewing may not be reliable or valid.

  6. Quantitative evaluation of toothbrush and arm-joint motion during tooth brushing.

    PubMed

    Inada, Emi; Saitoh, Issei; Yu, Yong; Tomiyama, Daisuke; Murakami, Daisuke; Takemoto, Yoshihiko; Morizono, Ken; Iwasaki, Tomonori; Iwase, Yoko; Yamasaki, Youichi

    2015-07-01

    It is very difficult for dental professionals to objectively assess the tooth brushing skill of patients, because an obvious index for assessing patients' brushing motion has not been established. The purpose of this study was to quantitatively evaluate toothbrush and arm-joint motion during tooth brushing. Tooth brushing motion, performed by dental hygienists for 15 s, was captured using a motion-capture system that continuously calculates the three-dimensional coordinates of an object's motion relative to the floor. The dental hygienists performed the tooth brushing on the buccal and palatal sides of their right and left upper molars. The frequencies and power spectra of toothbrush motion and of the joint angles of the shoulder, elbow, and wrist were calculated and analyzed statistically. The frequency of toothbrush motion was higher on the left side (both buccal and palatal areas) than on the right side. There were no significant differences among joint angle frequencies within each brushing area. The inter- and intra-individual variations of the power spectrum of the elbow flexion angle when brushing were smaller than for any of the other angles. This study quantitatively confirmed that dental hygienists have individual distinctive rhythms during tooth brushing. All arm joints moved synchronously during brushing, and tooth brushing motion was controlled by coordinated movement of the joints. The elbow generated an individual's frequency through a stabilizing movement. The shoulder and wrist control the hand motion, and the elbow generates the cyclic rhythm during tooth brushing.
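
    Extracting a brushing frequency from captured motion, as described above, amounts to locating the dominant peak of the motion signal's power spectrum. A minimal discrete-Fourier-transform sketch (a naive O(n²) DFT for clarity; a real analysis would use an FFT, and the function name is illustrative):

```python
import cmath

def dominant_frequency(signal, fs):
    """Dominant (non-DC) frequency of a real signal via a naive DFT.

    signal: list of samples; fs: sampling rate in Hz.
    Returns the frequency (Hz) of the strongest spectral peak.
    """
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]     # remove the DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):                # positive frequencies only
        coeff = sum(centered[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n
```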

  7. Microsystem enabled photovoltaic modules and systems

    DOEpatents

    Nielson, Gregory N; Sweatt, William C; Okandan, Murat

    2015-05-12

    A microsystem enabled photovoltaic (MEPV) module including: an absorber layer; a fixed optic layer coupled to the absorber layer; a translatable optic layer; a translation stage coupled between the fixed and translatable optic layers; and a motion processor electrically coupled to the translation stage to control motion of the translatable optic layer relative to the fixed optic layer. The absorber layer includes an array of photovoltaic (PV) elements. The fixed optic layer includes an array of quasi-collimating (QC) micro-optical elements designed and arranged to couple incident radiation from an intermediate image formed by the translatable optic layer into one of the PV elements such that it is quasi-collimated. The translatable optic layer includes an array of focusing micro-optical elements corresponding to the QC micro-optical element array. Each focusing micro-optical element is designed to produce a quasi-telecentric intermediate image from substantially collimated radiation incident within a predetermined field of view.

  8. Drifting while stepping in place in old adults: Association of self-motion perception with reference frame reliance and ground optic flow sensitivity.

    PubMed

    Agathos, Catherine P; Bernardin, Delphine; Baranton, Konogan; Assaiante, Christine; Isableu, Brice

    2017-04-07

    Optic flow provides visual self-motion information and has been shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based egocentric, frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, by reliance on the visual FoR, and by aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center of pressure (COP) translation, and optic flow sensitivity was defined as the ratio of COP translation velocity over absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by receding flow during QS and by approaching flow during SIP. In addition, old adults drifted forward while SIP without any imposed visual stimulation. Approaching flow limited this natural drift and receding flow enhanced it, as indicated by the VSQ. The VSQ appears to be a motor index of reliance on the visual FoR during SIP and is associated with greater reliance on the visual and reduced reliance on the egocentric FoR. Exploitation of the egocentric FoR for self-motion perception with respect to the ground surface is compromised by age and associated with greater sensitivity to optic flow.
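
    As defined above, the VSQ is the ratio of COP translation velocity to absolute optic flow velocity. A minimal sketch of that ratio, with hypothetical COP samples and a hypothetical flow speed (the study's exact velocity estimator is not specified here):

```python
import numpy as np

def visual_self_motion_quotient(cop_positions, dt, optic_flow_velocity):
    """VSQ: mean COP translation speed divided by the absolute optic flow velocity."""
    cop = np.asarray(cop_positions, dtype=float)
    mean_speed = np.mean(np.abs(np.diff(cop))) / dt   # mean anteroposterior COP speed
    return mean_speed / abs(optic_flow_velocity)

# hypothetical trial: COP drifting forward 1 mm per 0.1 s sample, flow at 0.05 m/s
cop = [0.001 * i for i in range(31)]
vsq = visual_self_motion_quotient(cop, dt=0.1, optic_flow_velocity=0.05)
```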

  9. Inertial Motion Capture Costume Design Study

    PubMed Central

    Szczęsna, Agnieszka; Skurowski, Przemysław; Lach, Ewa; Pruszowski, Przemysław; Pęszor, Damian; Paszkuta, Marcin; Słupik, Janusz; Lebek, Kamil; Janiak, Mateusz; Polański, Andrzej; Wojciechowski, Konrad

    2017-01-01

    The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs). Such a unit is composed of an accelerometer, a gyroscope and a magnetometer. The final quality of an obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. The construction of the system’s architecture with the dataflow programming paradigm makes it easy to add, remove and replace the data processing steps. The modular architecture of the system allows an effortless introduction of new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. The two key steps of the system design are explored in this paper: the evaluation of sensors and of algorithms for the orientation estimation. The three chosen algorithms have been implemented and investigated as part of the experiment. Due to the fact that the selection of the sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affect the quality of the final results. PMID:28304337
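
    The orientation estimation stage can be filled by any of several sensor fusion algorithms. As an illustration only (not one of the three algorithms evaluated in the paper), a single-axis complementary filter blending integrated gyroscope rate with accelerometer tilt might look like:

```python
import math

def complementary_filter(gyro_rates, accel_samples, dt, alpha=0.98):
    """Fuse integrated gyro rate with accelerometer tilt (pitch, single axis)."""
    angle = 0.0
    history = []
    for gyro, (ax, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ax, az)   # tilt inferred from the gravity direction
        angle = alpha * (angle + gyro * dt) + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# static sensor tilted 30 degrees: gyro reads ~0, accelerometer sees the tilt
n = 400
angles = complementary_filter([0.0] * n, [(0.5, math.sqrt(3) / 2)] * n, dt=0.01)
```

    The gyro term tracks fast motion while the accelerometer term slowly corrects drift; here the estimate converges to the static 30° tilt.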

  10. Markerless identification of key events in gait cycle using image flow.

    PubMed

    Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn

    2012-01-01

    Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.

  11. Ultrafast large-amplitude relocation of electronic charge in ionic crystals

    PubMed Central

    Zamponi, Flavio; Rothhardt, Philip; Stingl, Johannes; Woerner, Michael; Elsaesser, Thomas

    2012-01-01

    The interplay of vibrational motion and electronic charge relocation in an ionic hydrogen-bonded crystal is mapped by X-ray powder diffraction with a 100 fs time resolution. Photoexcitation of the prototype material KH2PO4 induces coherent low-frequency motions of the PO4 tetrahedra in the electronically excited state of the crystal while the average atomic positions remain unchanged. Time-dependent maps of electron density derived from the diffraction data demonstrate an oscillatory relocation of electronic charge with a spatial amplitude two orders of magnitude larger than the underlying vibrational lattice motions. Coherent longitudinal optical and transverse optical phonon motions, which dephase on a time scale of several picoseconds, drive the charge relocation, similar to a soft (transverse optical) mode driven phase transition between the ferro- and paraelectric phase of KH2PO4. PMID:22431621

  12. Relative-Motion Sensors and Actuators for Two Optical Tables

    NASA Technical Reports Server (NTRS)

    Gursel, Yekta; McKenney, Elizabeth

    2004-01-01

    Optoelectronic sensors and magnetic actuators have been developed as parts of a system for controlling the relative position and attitude of two massive optical tables that float on separate standard air suspensions that attenuate ground vibrations. In the specific application for which these sensors and actuators were developed, one of the optical tables holds an optical system that mimics distant stars, while the other optical table holds a test article that simulates a spaceborne stellar interferometer that would be used to observe the stars. The control system is designed to suppress relative motion of the tables or, on demand, to impose controlled relative motion between the tables. The control system includes a sensor system that detects relative motion of the tables in six independent degrees of freedom and a drive system that can apply force to the star-simulator table in the six degrees of freedom. The sensor system includes (1) a set of laser heterodyne gauges and (2) a set of four diode lasers on the star-simulator table, each aimed at one of four quadrant photodiodes at nominal corresponding positions on the test-article table. The heterodyne gauges are used to measure relative displacements along the x axis.
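
    A quadrant photodiode resolves the laser spot's transverse displacement from its four segment photocurrents via normalized differences. A minimal sketch; the quadrant labeling below is an assumed convention, not taken from the article:

```python
def quadrant_spot_position(q_a, q_b, q_c, q_d):
    """Normalized (x, y) spot offset from four quadrant photocurrents.
    Assumed layout: A upper-right, B upper-left, C lower-left, D lower-right."""
    total = q_a + q_b + q_c + q_d
    x = ((q_a + q_d) - (q_b + q_c)) / total   # right half minus left half
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top half minus bottom half
    return x, y

# spot centred on the detector: equal currents give zero offset
x0, y0 = quadrant_spot_position(1.0, 1.0, 1.0, 1.0)
```

    Normalizing by the total current makes the reading insensitive to laser power fluctuations.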

  13. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  14. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
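
    The least-mean-squares motion estimate can be illustrated with the classic closed-form rigid-motion fit between two 3D point sets (a Kabsch/SVD solution; the article does not give the actual JPL implementation, so this is only a sketch of the technique):

```python
import numpy as np

def estimate_rigid_motion(points_a, points_b):
    """Least-squares rigid motion (R, t) such that points_b ≈ R @ points_a + t."""
    A, B = np.asarray(points_a, float), np.asarray(points_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# synthetic check: rotate known points about Z by 0.1 rad and translate
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
R_est, t_est = estimate_rigid_motion(P, P @ R_true.T + t_true)
```

    In the visual-odometry setting, the point sets would come from stereoscopic 3D positions tracked across frames via optical flow, with outliers (pixels on moving objects) excluded before the fit.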

  15. Markerless motion capture systems as training device in neurological rehabilitation: a systematic review of their use, application, target population and efficacy.

    PubMed

    Knippenberg, Els; Verbrugghe, Jonas; Lamers, Ilse; Palmaers, Steven; Timmermans, Annick; Spooren, Annemie

    2017-06-24

    Client-centred task-oriented training is important in neurological rehabilitation but is time consuming and costly in clinical practice. The use of technology, especially motion capture systems (MCS) which are low cost and easy to apply in clinical practice, may support this kind of training, but knowledge and evidence of their use for training are scarce. The present review aims to investigate 1) which motion capture systems are used as training devices in neurological rehabilitation, 2) how they are applied, 3) in which target populations, 4) what the content of the training is, and 5) what the efficacy of training with MCS is. A computerised systematic literature review was conducted in four databases (PubMed, Cinahl, Cochrane Database and IEEE). The following MeSH terms and key words were used: Motion, Movement, Detection, Capture, Kinect, Rehabilitation, Nervous System Diseases, Multiple Sclerosis, Stroke, Spinal Cord, Parkinson Disease, Cerebral Palsy and Traumatic Brain Injury. Van Tulder's quality assessment was used to score the methodological quality of the selected studies. The descriptive analysis is reported by MCS, target population, training parameters and training efficacy. Eighteen studies were selected (mean Van Tulder score = 8.06 ± 3.67). Based on methodological quality, six studies were selected for analysis of training efficacy. The most commonly used MCS was the Microsoft Kinect, and training was mostly conducted in upper limb stroke rehabilitation. Training programs varied in intensity, frequency and content. None of the studies reported an individualised training program based on a client-centred approach. Motion capture systems are training devices with potential in neurological rehabilitation to increase motivation during training and may assist improvement on one or more International Classification of Functioning, Disability and Health (ICF) levels. 
Although client-centred task-oriented training is important in neurological rehabilitation, the client-centred approach was not included. Future technological developments should take up the challenge of combining MCS with the principles of a client-centred task-oriented approach and prove efficacy using randomised controlled trials with long-term follow-up. PROSPERO registration number: 42016035582.

  16. Nonlinear finite element analysis of liquid sloshing in complex vehicle motion scenarios

    NASA Astrophysics Data System (ADS)

    Nicolsen, Brynne; Wang, Liang; Shabana, Ahmed

    2017-09-01

    The objective of this investigation is to develop a new total Lagrangian continuum-based liquid sloshing model that can be systematically integrated with multibody system (MBS) algorithms in order to allow for studying complex motion scenarios. The new approach allows for accurately capturing the effect of the sloshing forces during curve negotiation, rapid lane change, and accelerating and braking scenarios. In these motion scenarios, the liquid experiences large displacements and significant changes in shape that can be captured effectively using the finite element (FE) absolute nodal coordinate formulation (ANCF). ANCF elements are used in this investigation to describe complex mesh geometries, to capture the change in inertia due to the change in the fluid shape, and to accurately calculate the centrifugal forces, which for flexible bodies do not take the simple form used in rigid body dynamics. A penalty formulation is used to define the contact between the rigid tank walls and the fluid. A fully nonlinear MBS truck model that includes a suspension system and Pacejka's brush tire model is developed. Specified motion trajectories are used to examine the vehicle dynamics in three different scenarios - deceleration during straight-line motion, rapid lane change, and curve negotiation. It is demonstrated that the liquid sloshing changes the contact forces between the tires and the ground - increasing the forces on certain wheels and decreasing the forces on other wheels. In cases of extreme sloshing, this dynamic behavior can negatively impact the vehicle stability by increasing the possibility of wheel lift and vehicle rollover.

  17. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.
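
    The final step, reconstructing 3D curves from the two 2D skeletons, can be sketched point-by-point with linear (DLT) triangulation from two calibrated views. The projection matrices below are toy values, not the paper's calibration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)     # homogeneous least squares: smallest singular vector
    X = Vt[-1]
    return X[:3] / X[3]

# toy stereo rig: identity intrinsics, second camera shifted along X
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]
X_est = triangulate_point(P1, P2, x1, x2)
```

    Triangulating corresponding skeleton samples along the arm in each frame yields the time-varying 3D curves described above.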

  18. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    NASA Astrophysics Data System (ADS)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can provide a robot the ability to fly in global positioning system (GPS) denied environments, such as indoor environments. In this work, optic flow sensors are used to stabilize the motion of the quadcopter robot, where an optic flow algorithm is applied to provide odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic-flow sensor and algorithm are capable of gathering and processing images at 250 frames/s, and the sensor package weighs 2.5 g and has a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
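
    A minimal PID loop of the kind described, using the optic-flow odometry value as the measured position. The gains and the crude velocity-command plant below are illustrative assumptions, not the thesis's tuning:

```python
class PID:
    """Discrete PID controller; optic-flow odometry supplies the feedback term."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# toy simulation: move the estimated position 1 m using odometry feedback
pid = PID(kp=1.0, ki=0.0, kd=0.05, dt=0.1)
position = 0.0                                        # odometry-estimated displacement (m)
for _ in range(300):
    position += pid.update(1.0, position) * pid.dt    # crude velocity-command plant
```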

  19. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.

  20. Measurement of six-degree-of-freedom planar motions by using a multiprobe surface encoder

    NASA Astrophysics Data System (ADS)

    Li, Xinghui; Shimizu, Yuki; Ito, Takeshi; Cai, Yindi; Ito, So; Gao, Wei

    2014-12-01

    A multiprobe surface encoder for optical metrology of six-degree-of-freedom (six-DOF) planar motions is presented. The surface encoder is composed of an XY planar scale grating with identical microstructures in the X- and Y-axes and an optical sensor head. In the optical sensor head, three parallel laser beams were used as laser probes. After being divided by a beam splitter, the three laser probes were projected onto the scale grating and onto a reference grating with identical microstructures. For each probe, the first-order positive and negative diffraction beams along the X- and Y-directions from the scale grating and from the reference grating were superimposed, generating four interference signals. Three-DOF translational motions of the scale grating Δx, Δy, and Δz can be obtained simultaneously from the interference signals of each probe. Three-DOF angular error motions θX, θY, and θZ can also be calculated simultaneously from the differences among the three probes' displacement outputs and the geometric relationship among the probes. A prototype optical sensor head was designed, constructed, and evaluated. Experimental results verified that this surface encoder provides sub-nanometer measurement resolution for the three-DOF translational motions and better than 0.1 arcsec resolution for the three-DOF angular error motions.
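
    The angular recovery rests on small-angle geometry: the difference between two probes' readings divided by their separation approximates a tilt. A sketch under an assumed right-angle probe layout (the real probe geometry and spacing are given in the paper, not here):

```python
def angular_errors(dz, dx, spacing):
    """Small-angle pitch/roll/yaw from three-probe displacement differences.
    Assumed layout: probe 0 at the origin, probe 1 offset by `spacing` along X,
    probe 2 offset by `spacing` along Y on the XY scale grating."""
    theta_y = (dz[1] - dz[0]) / spacing   # pitch: Z difference along X
    theta_x = (dz[2] - dz[0]) / spacing   # roll:  Z difference along Y
    theta_z = (dx[2] - dx[0]) / spacing   # yaw:   X difference along Y
    return theta_x, theta_y, theta_z

# probes 10 mm apart; a 2 microradian pitch shows up as 20 nm of differential Z reading
tx, ty, tz = angular_errors(dz=[0.0, 20e-9, 0.0], dx=[0.0, 0.0, 0.0], spacing=0.01)
```

    This is why sub-nanometer displacement resolution translates into sub-arcsecond angular resolution over centimetre-scale probe separations.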

  1. Teasing Apart Complex Motions using VideoPoint

    NASA Astrophysics Data System (ADS)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  2. The efficacy of interactive, motion capture-based rehabilitation on functional outcomes in an inpatient stroke population: a randomized controlled trial.

    PubMed

    Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran Dk; Bird, Marie-Louise

    2018-02-01

    To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Randomized controlled clinical trial. Two subacute hospital rehabilitation units in Australia. In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8-40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (-3.0 to 5.0) cm) with no difference between groups ( P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) ( P = 0.04). No adverse events were recorded during therapy. Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant.

  3. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.

    PubMed

    Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup

    2017-02-01

    We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low-level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated by users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100% accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
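
    The low-level descriptive statistics named above (path length, number of movements, completion time) can be computed from a sampled fingertip trajectory. A minimal sketch; the pause-based movement segmentation below is one plausible definition, not necessarily the authors':

```python
import math

def motion_metrics(positions, dt, pause_threshold=1e-3):
    """Path length, movement count and completion time from a sampled 3D trace.
    A 'movement' is a run of samples whose speed exceeds pause_threshold (m/s)."""
    steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    path_length = sum(steps)
    moving = [s / dt > pause_threshold for s in steps]
    # count rising edges: pause -> motion transitions
    movements = sum(1 for prev, cur in zip([False] + moving, moving) if cur and not prev)
    completion_time = dt * (len(positions) - 1)
    return path_length, movements, completion_time

# hypothetical trace: two 1-sample movements separated by a 2-sample pause
trace = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 0, 0), (2, 0, 0), (3, 0, 0)]
length, count, duration = motion_metrics(trace, dt=1.0)
```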

  4. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phase motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. 
However, improvements in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential in capturing clinical outcomes. PMID:25811838
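
    Relative accuracy of the kind evaluated here reduces to the angle between two modules' orientation estimates. With rotation-matrix outputs, that angle follows from the trace of the relative rotation (a standard identity, sketched below; the study's actual error metric may differ):

```python
import numpy as np

def relative_orientation_angle(R_a, R_b):
    """Angle (rad) of the rotation taking one AHRS orientation estimate to the other."""
    R_rel = R_a.T @ R_b
    cos_angle = (np.trace(R_rel) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # clip guards rounding error

# two module estimates differing by a 0.3 rad rotation about Z
c, s = np.cos(0.3), np.sin(0.3)
R1 = np.eye(3)
R2 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
drift = relative_orientation_angle(R1, R2)
```

    Tracking this angle over a trial directly measures the inter-module drift reported in the Findings above.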

  5. Assessment method of digital Chinese dance movements based on virtual reality technology

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Shao, Shuyuan; Wang, Shumin

    2008-03-01

    Virtual reality has played an increasing role in such areas as medicine, architecture, aviation, engineering science and advertising. However, in the art fields, virtual reality is still in its infancy in the representation of human movements. Based on the techniques of motion capture and reuse of motion capture data in a virtual reality environment, this paper presents an assessment method for quantitatively evaluating dancers' basic Arm Position movements in Chinese traditional dance. In this paper, the data for quantifying traits of dance motions are defined and measured on dances performed by an expert and two beginners. The results indicate that these data are useful for evaluating dance skill and distinctiveness, and that the proposed assessment method of digital Chinese dance movements based on virtual reality technology is valid and feasible.

  6. Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

    PubMed Central

    Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas

    2013-01-01

    Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933
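
    At its simplest, the kinematic-constraint idea amounts to clamping each estimated joint angle to an anatomical range; the paper's model-based generator is more elaborate, and the limits below are hypothetical:

```python
def enforce_joint_limits(angles, limits):
    """Clamp estimated joint angles (radians) to per-joint anatomical limits,
    rejecting the 'odd poses' a raw tracker can emit."""
    return {joint: min(max(value, limits[joint][0]), limits[joint][1])
            for joint, value in angles.items()}

# hypothetical limits: an elbow cannot hyperextend past 0 rad
clamped = enforce_joint_limits({"elbow": -0.4, "knee": 1.2},
                               {"elbow": (0.0, 2.6), "knee": (0.0, 2.3)})
```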

  7. KSC-08pd1900

    NASA Image and Video Library

    2008-07-02

    CAPE CANAVERAL, Fla. – David Voci, NYIT MOCAP (Motion Capture) team co-director (seated at the workstation in the background) prepares to direct a motion capture session assisted by Kennedy Advanced Visualizations Environment staff led by Brad Lawrence (not pictured) and by Lora Ridgwell from United Space Alliance Human Factors (foreground, left). Ridgwell will help assemble the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.

  8. The relationship between body movements and qualities of social interaction between a boy with severe developmental disabilities and his caregiver.

    PubMed

    Dammeyer, Jesper; Køppe, Simo

    2013-06-01

    Research in social interaction and nonverbal communication among individuals with severe developmental disabilities also includes the study of body movements. Advances in analytical technology give new possibilities for measuring body movements more accurately and reliably. One such advance is the Qualisys Motion Capture System (QMCS), which utilizes optical markers to capture body movements. The aim of this study was to explore the practicality of measuring body movements in the nonverbal communication of a child with severe developmental disabilities. A preliminary case study has been undertaken. The social interaction between a boy with developmental disabilities and his teacher was analyzed (1) using observer ratings on psychological aspects of the social interaction and (2) measuring body positions, velocity, and angles of body movements using the QMCS. Associations between observer ratings and measured body movements were examined. This preliminary case study has indicated that emotional response and attention level during the social interaction corresponded with local, synchronized movements and face-to-face orientation. Measurement of motor behavior is suggested as being a potentially useful methodological approach to studying social interaction and communication development.

  9. A study on validating KinectV2 in comparison of Vicon system as a motion capture system for using in Health Engineering in industry

    NASA Astrophysics Data System (ADS)

    Jebeli, Mahvash; Bilesan, Alireza; Arshi, Ahmadreza

    2017-06-01

    The currently available commercial motion capture systems are constrained by space requirements and thus pose difficulties when used to develop kinematic descriptions of human movements within existing manufacturing and production cells. The Kinect sensor does not share these limitations, but it is not as accurate. The proposition made in this article is to adopt the Kinect sensor to facilitate the implementation of Health Engineering concepts in industrial environments. This article is an evaluation of the Kinect sensor's accuracy when providing three-dimensional kinematic data. The sensor is thus utilized to assist in modeling and simulation of worker performance within an industrial cell. For this purpose, Kinect 3D data were compared to those of a Vicon motion capture system in a gait analysis laboratory. Results indicated that the Kinect sensor exhibited a coefficient of determination of 0.9996 on the depth axis, 0.9849 along the horizontal axis, and 0.2767 on the vertical axis. The results demonstrate the competency of the Kinect sensor for use in industrial environments.
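
    The coefficients of determination reported above can be reproduced for any pair of single-axis traces with a few lines of code. The sketch below computes R² as the squared Pearson correlation between a Vicon reference trace and a Kinect trace; the sample values are hypothetical, assuming only that the two systems are time-synchronized along one axis.

```python
def r_squared(reference, measured):
    """Coefficient of determination as the squared Pearson correlation
    between a reference trace (e.g. Vicon) and a sensor trace (e.g. Kinect)."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(measured) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, measured))
    sxx = sum((x - mx) ** 2 for x in reference)
    syy = sum((y - my) ** 2 for y in measured)
    return (sxy * sxy) / (sxx * syy)

vicon = [0.0, 1.0, 2.0, 3.0, 4.0]    # hypothetical depth-axis positions
kinect = [0.1, 1.1, 1.9, 3.2, 3.9]   # hypothetical Kinect estimates
print(r_squared(vicon, kinect))      # close to 1 on a well-tracked axis
```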

  10. Optical tweezers with 2.5 kHz bandwidth video detection for single-colloid electrophoresis

    NASA Astrophysics Data System (ADS)

    Otto, Oliver; Gutsche, Christof; Kremer, Friedrich; Keyser, Ulrich F.

    2008-02-01

    We developed an optical tweezers setup to study the electrophoretic motion of colloids in an external electric field. The setup is based on standard components for illumination and video detection. Our video based optical tracking of the colloid motion has a time resolution of 0.2 ms, resulting in a bandwidth of 2.5 kHz. This enables calibration of the optical tweezers by Brownian motion without applying a quadrant photodetector. We demonstrate that our system has a spatial resolution of 0.5 nm and a force sensitivity of 20 fN using a Fourier algorithm to detect periodic oscillations of the trapped colloid caused by an external ac field. The electrophoretic mobility and zeta potential of a single colloid can be extracted in aqueous solution avoiding screening effects common for usual bulk measurements.
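
    The abstract notes that the high-bandwidth video tracking enables calibration by Brownian motion alone. One standard route to such a calibration (used here as an illustrative assumption, not necessarily the authors' exact procedure) is the equipartition method, which infers the trap stiffness from the variance of the tracked bead position:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_k=295.0):
    """Estimate trap stiffness k from tracked bead positions using the
    equipartition theorem: (1/2) k <x^2> = (1/2) kB T, so k = kB T / var(x)."""
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return KB * temperature_k / var

# Hypothetical tracked positions (metres), roughly 10 nm spread
positions = [x * 1e-9 for x in (-12.0, -5.0, 0.0, 4.0, 13.0)]
k = trap_stiffness_equipartition(positions)
print(k)  # trap stiffness in N/m
```

    In practice many thousands of tracked frames would be used, and the 2.5 kHz bandwidth matters because undersampled Brownian motion biases the variance estimate.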

  11. Electrostatic micromembrane actuator arrays as motion generator

    NASA Astrophysics Data System (ADS)

    Wu, X. T.; Hui, J.; Young, M.; Kayatta, P.; Wong, J.; Kennith, D.; Zhe, J.; Warde, C.

    2004-05-01

    A rigid-body motion generator based on an array of micromembrane actuators is described. Unlike previous microelectromechanical systems (MEMS) techniques, the architecture employs a large number (typically greater than 1000) of micron-sized (10-200 μm) membrane actuators to simultaneously generate the displacement of a large rigid body, such as a conventional optical mirror. For optical applications, the approach provides optical design freedom of MEMS mirrors by enabling large-aperture mirrors to be driven electrostatically by MEMS actuators. The micromembrane actuator arrays have been built using a stacked architecture similar to that employed in the Multiuser MEMS Process (MUMPS), and the motion transfer from the arrayed micron-sized actuators to macro-sized components was demonstrated.

  12. Relative Motion of the WDS 05110+3203 STF 648 System, With a Protocol for Calculating Relative Motion

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2010-07-01

    Relative motion studies of visual double stars can be investigated using least squares regression techniques and readily accessible programs such as Microsoft Excel and a calculator. Optical pairs differ from physical pairs under most geometries in both their simple scatter plots and their regression models. A step-by-step protocol for estimating the rectilinear elements of an optical pair is presented. The characteristics of physical pairs using these techniques are discussed.

  13. Adaptive virus detection using filament-coupled antibodies.

    PubMed

    Stone, Gregory P; Lin, Kelvin S; Haselton, Frederick R

    2006-01-01

    We recently reported the development of a filament-antibody recognition assay (FARA), in which the presence of virions in solution initiates the formation of enzyme-linked immunosorbent assay (ELISA)-like antibody complexes. The unique features of this assay are that processing is achieved by motion of a filament and that, in the presence of a virus, antibody-virus complexes are coupled to the filament at known locations. In this work, we combine the unique features of this assay with a 638-nm laser-based optical detector to enable adaptive control of virus detection. Integration of on-line fluorescence detection yields approximately a five-fold increase in signal-to-noise ratio (SNR) compared to the fluorescence detection method reported previously. A one-minute incubation with an M13K07 test virus is required to detect 10^10 virions/ml, and 40 min is required to detect 10^8 virions/ml. In tests of the components of an adaptive strategy, a 30-min virus incubation (3.3 x 10^10 virions/ml), followed by repositioning of the filament-captured virus either within the detecting-antibody chamber (20 microg/ml) or within the virus chamber, showed an increase in signal roughly proportional to the cumulative residence times in these chambers. Furthermore, cumulative fluorescence signals observed for a filament-captured virus after repeated positioning of the filament within the virus chamber are similar to those observed for a single long incubation. The FARA-like design combined with on-line optical detection to direct subsequent bioprocessing steps provides new flexibility for developing adaptive molecular recognition assays.

  14. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns

    NASA Astrophysics Data System (ADS)

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-09-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  15. Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns.

    PubMed

    von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet

    2010-01-01

    A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.

  16. Optics derotator servo control system for SONG Telescope

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Ren, Changzhi; Ye, Yu

    2012-09-01

    The Stellar Oscillations Network Group (SONG) is an initiative aimed at designing and building a ground-based network of 1 m telescopes dedicated to the study of phenomena occurring in the time domain. The Chinese standard node of SONG is an F/37 Alt-Az telescope with a 1 m diameter. The optics derotator control system of the SONG telescope adopts the development model of "Industrial Computer + UMAC Motion Controller + Servo Motor". The industrial computer is the core processing part of the motion control; the motion control card (UMAC) handles the details of motion control; and the servo amplifier accepts control commands from the UMAC and drives the servo motor. Position feedback comes from an encoder, forming a closed-loop control system. This paper describes in detail the hardware and software design of the optics derotator servo control system. On the hardware side, the principle, structure, and control algorithm of the derotator servo system are analyzed and explored. On the software side, the paper proposes a system software architecture based on object-oriented programming.

  17. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  18. Commercial Motion Sensor Based Low-Cost and Convenient Interactive Treadmill.

    PubMed

    Kim, Jonghyun; Gravunder, Andrew; Park, Hyung-Soon

    2015-09-17

    Interactive treadmills were developed to improve the simulation of overground walking when compared to conventional treadmills. However, currently available interactive treadmills are expensive and inconvenient, which limits their use. We propose a low-cost and convenient version of the interactive treadmill that does not require expensive equipment and a complicated setup. As a substitute for high-cost sensors, such as motion capture systems, a low-cost motion sensor was used to recognize the subject's intention for speed changing. Moreover, the sensor enables the subject to make a convenient and safe stop using gesture recognition. For further cost reduction, the novel interactive treadmill was based on an inexpensive treadmill platform and a novel high-level speed control scheme was applied to maximize performance for simulating overground walking. Pilot tests with ten healthy subjects were conducted and results demonstrated that the proposed treadmill achieves similar performance to a typical, costly, interactive treadmill that contains a motion capture system and an instrumented treadmill, while providing a convenient and safe method for stopping.
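
    The high-level speed control scheme is described only qualitatively, but a common position-based approach is proportional adaptation of belt speed to the subject's distance from a reference point on the belt. The following sketch illustrates one such update step; the gain, limits, and function name are hypothetical and not taken from the paper.

```python
def belt_speed_command(current_speed, subject_pos, ref_pos=0.0,
                       gain=0.8, max_speed=2.5):
    """One update of a proportional high-level speed controller: speed up
    when the subject walks ahead of the reference point on the belt, slow
    down when behind. Gain and limits are illustrative only."""
    cmd = current_speed + gain * (subject_pos - ref_pos)
    return max(0.0, min(max_speed, cmd))

print(belt_speed_command(1.2, 0.15))   # subject 15 cm ahead: speed up
print(belt_speed_command(1.2, -0.10))  # subject 10 cm behind: slow down
```

    Run at the motion sensor's frame rate, updates of this form let the belt follow the subject's intended walking speed; a gesture-recognition branch can force the command to zero for a safe stop.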

  19. Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.

    PubMed

    Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen

    2017-06-01

    The article proposes a set of metrics for evaluating patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessing the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
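
    The simplest of the reviewed model-less metrics, the root-mean-square distance between a captured sequence and a reference, can be sketched as follows; the frame layout and values are hypothetical.

```python
import math

def rms_distance(seq_a, seq_b):
    """Root-mean-square distance between two equal-length motion sequences,
    each a list of per-frame joint coordinates (a model-less metric)."""
    total = 0.0
    count = 0
    for frame_a, frame_b in zip(seq_a, seq_b):
        for a, b in zip(frame_a, frame_b):
            total += (a - b) ** 2
            count += 1
    return math.sqrt(total / count)

reference = [[0.0, 1.0], [0.5, 1.2], [1.0, 1.4]]  # hypothetical template
patient = [[0.1, 1.0], [0.4, 1.3], [1.0, 1.5]]    # hypothetical performance
print(rms_distance(reference, patient))
```

    Lower values indicate closer agreement with the prescribed motion; the model-based metrics in the article replace the raw reference with a learned statistical model.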

  20. Foot and Ankle Kinematics and Dynamic Electromyography: Quantitative Analysis of Recovery From Peroneal Neuropathy in a Professional Football Player.

    PubMed

    Prasad, Nikhil K; Coleman Wood, Krista A; Spinner, Robert J; Kaufman, Kenton R

    The assessment of neuromuscular recovery after peripheral nerve surgery has typically been a subjective physical examination. The purpose of this report was to assess the value of gait analysis in documenting recovery quantitatively. A professional football player underwent gait analysis before and after surgery for a peroneal intraneural ganglion cyst causing a left-sided foot drop. Surface electromyography (SEMG) recording from surface electrodes and motion parameter acquisition from a computerized motion capture system consisting of 10 infrared cameras were performed simultaneously. A comparison between SEMG recordings before and after surgery showed a progression from disorganized activation in the left tibialis anterior and peroneus longus muscles to temporally appropriate activation for the phase of the gait cycle. Kinematic analysis of ankle motion planes showed resolution from a complete foot drop preoperatively to phase-appropriate dorsiflexion postoperatively. Gait analysis with dynamic SEMG and motion capture complements physical examination when assessing postoperative recovery in athletes.

  1. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    PubMed

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Toward automated formation of microsphere arrangements using multiplexed optical tweezers

    NASA Astrophysics Data System (ADS)

    Rajasekaran, Keshav; Bollavaram, Manasa; Banerjee, Ashis G.

    2016-09-01

    Optical tweezers offer certain advantages such as multiplexing using a programmable spatial light modulator, flexibility in the choice of the manipulated object and the manipulation medium, precise control, easy object release, and minimal object damage. However, automated manipulation of multiple objects in parallel, which is essential for efficient and reliable formation of micro-scale assembly structures, poses a difficult challenge. There are two primary research issues in addressing this challenge. First, the presence of a stochastic Langevin force giving rise to Brownian motion requires motion control of all the manipulated objects at fast rates of several Hz. Second, the object dynamics are non-linear and difficult to represent analytically due to the interaction of multiple optical traps manipulating neighboring objects. As a result, automated controllers have not been realized for tens of objects, particularly for three-dimensional motions with guaranteed collision avoidance. In this paper, we model the effect of interacting optical traps on microspheres with significant Brownian motions in stationary fluid media, and develop simplified state-space representations. These representations are used to design a model predictive controller to coordinate the motions of several spheres in real time. Preliminary experiments demonstrate the utility of the controller in automatically forming desired arrangements of varying configurations starting with randomly dispersed microspheres.

  3. Optical identification of two nearby Isolated Neutron Stars through proper motion measurement.

    NASA Astrophysics Data System (ADS)

    Zane, Silvia

    2004-07-01

    The aim of this proposal is to perform high-resolution imaging of the proposed optical counterparts of the two radio-silent isolated neutron stars RX J1308.6+2127 and RX J1605.3+3249 with the STIS/50CCD. Imaging both fields with the same instrumental configuration used in mid-2001 by Kaplan et al. (2002; 2003) will allow us to measure the objects' positions and to determine their proper motions over a time base of nearly four years. The measurement of proper motions at the level of at least a few tens of mas/yr, expected for relatively nearby neutron stars, would unambiguously secure the proposed optical identifications, which are not achievable otherwise. In addition, knowledge of the proper motion will provide useful indications of the space velocity and distance of these neutron stars, as well as of the radius. Constraining these parameters is of paramount importance to discriminate between the variety of emission mechanisms invoked to explain their observed thermal X-ray spectra and to probe the neutron star equation of state (EOS). The determination of the proper motion is a decisive step toward a dedicated follow-up program, again to be performed with the STIS/50CCD, aimed at measuring the objects' optical parallax and thus providing much firmer constraints on the stars' properties.

  4. System and Method for Measuring Skin Movement and Strain and Related Techniques

    NASA Technical Reports Server (NTRS)

    Newman, Dava J. (Inventor); Wessendorf, Ashley M. (Inventor)

    2015-01-01

    Described herein are systems and techniques for a motion capture system and a three-dimensional (3D) tracking system used to record body position and/or movements/motions and using the data to measure skin strain (a strain field) all along the body while a joint is in motion (dynamic) as well as in a fixed position (static). The data and technique can be used to quantify strains, calculate 3D contours, and derive patterns believed to reveal skin's properties during natural motions.

  5. A Surface-Coupled Optical Trap with 1-bp Precision via Active Stabilization

    PubMed Central

    Okoniewski, Stephen R.; Carter, Ashley R.; Perkins, Thomas T.

    2017-01-01

    Optical traps can measure bead motions with Å-scale precision. However, using this level of precision to infer 1-bp motion of molecular motors along DNA is difficult, since a variety of noise sources degrade instrumental stability. In this chapter, we detail how to improve instrumental stability by (i) minimizing laser pointing, mode, polarization, and intensity noise using an acousto-optical-modulator mediated feedback loop and (ii) minimizing sample motion relative to the optical trap using a 3-axis piezo-electric-stage mediated feedback loop. These active techniques play a critical role in achieving a surface stability of 1 Å in 3D over tens of seconds and a 1-bp stability and precision in a surface-coupled optical trap over a broad bandwidth (Δf = 0.03–2 Hz) at low force (6 pN). These active stabilization techniques can also aid other biophysical assays that would benefit from improved laser stability and/or Å-scale sample stability, such as atomic force microscopy and super-resolution imaging. PMID:27844426
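
    The piezo-stage feedback loop described above can be illustrated by a minimal integral controller that repeatedly nudges the stage against the measured residual drift. The gain and drift values below are hypothetical, for illustration only:

```python
def stabilize(drifts, gain=0.5):
    """Minimal integral-feedback sketch: each cycle, the stage offset is
    nudged against the residual drift still seen by the detector.
    The gain and drift values are illustrative only."""
    offset = 0.0
    residuals = []
    for d in drifts:
        residual = d - offset      # apparent motion after correction
        offset += gain * residual  # integral feedback update
        residuals.append(residual)
    return residuals

# A constant 2-nm drift is cancelled geometrically, cycle by cycle:
print(stabilize([2.0] * 6))
```

    The real instrument runs two such loops, one on the laser (via the acousto-optical modulator) and one on the sample stage, which together achieve the quoted 1 Å surface stability.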

  6. A high bandwidth three-axis out-of-plane motion measurement system based on optical beam deflection

    NASA Astrophysics Data System (ADS)

    Piyush, P.; Giridhar, M. S.; Jayanth, G. R.

    2018-03-01

    Multi-axis measurement of motion is indispensable for characterization of dynamic systems and control of motion stages. This paper presents an optical beam deflection-based measurement system to simultaneously measure three-axis out-of-plane motion of both micro- and macro-scale targets. Novel strategies are proposed to calibrate the sensitivities of the measurement system. Subsequently the measurement system is experimentally realized and calibrated. The system is employed to characterize coupled linear and angular motion of a piezo-actuated stage. The measured motion is shown to be in agreement with theoretical expectation. Next, the high bandwidth of the measurement system has been showcased by utilizing it to measure coupled two-axis transient motion of a Radio Frequency Micro-Electro-Mechanical System switch with a rise time of about 60 μs. Finally, the ability of the system to measure out-of-plane angular motion about the second axis has been demonstrated by measuring the deformation of a micro-cantilever beam.

  7. Reducing motion artifacts for long-term clinical NIRS monitoring using collodion-fixed prism-based optical fibers

    PubMed Central

    Yücel, Meryem A.; Selb, Juliette; Boas, David A.; Cash, Sydney S.; Cooper, Robert J.

    2013-01-01

    As the applications of near-infrared spectroscopy (NIRS) continue to broaden and long-term clinical monitoring becomes more common, minimizing signal artifacts due to patient movement becomes more pressing. This is particularly true in applications where clinically and physiologically interesting events are intrinsically linked to patient movement, as is the case in the study of epileptic seizures. In this study, we apply an approach common in the application of EEG electrodes to the application of specialized NIRS optical fibers. The method provides improved optode-scalp coupling through the use of miniaturized optical fiber tips fixed to the scalp using collodion, a clinical adhesive. We investigate and quantify the performance of this new method in minimizing motion artifacts in healthy subjects, and apply the technique to allow continuous NIRS monitoring throughout epileptic seizures in two epileptic in-patients. Using collodion-fixed fibers reduces the percent signal change of motion artifacts by 90% and increases the SNR by 6- and 3-fold at 690 and 830 nm wavelengths, respectively, when compared to a standard Velcro-based array of optical fibers. The change in both HbO and HbR during motion artifacts is found to be statistically lower for the collodion-fixed fiber probe. The collodion-fixed optical fiber approach has also allowed us to obtain good quality NIRS recording of three epileptic seizures in two patients despite excessive motion in each case. PMID:23796546

  8. Optical Trapping of Ion Coulomb Crystals

    NASA Astrophysics Data System (ADS)

    Schmidt, Julian; Lambrecht, Alexander; Weckesser, Pascal; Debatin, Markus; Karpa, Leon; Schaetz, Tobias

    2018-04-01

    The electronic and motional degrees of freedom of trapped ions can be controlled and coherently coupled on the level of individual quanta. Assembling complex quantum systems ion by ion while keeping this unique level of control remains a challenging task. For many applications, linear chains of ions in conventional traps are ideally suited to address this problem. However, driven motion due to the magnetic or radio-frequency electric trapping fields sometimes limits the performance in one dimension and severely affects the extension to higher-dimensional systems. Here, we report on the trapping of multiple barium ions in a single-beam optical dipole trap without radio-frequency or additional magnetic fields. We study the persistence of order in ensembles of up to six ions within the optical trap, measure their temperature, and conclude that the ions form a linear chain, commonly called a one-dimensional Coulomb crystal. As a proof-of-concept demonstration, we access the collective motion and perform spectrometry of the normal modes in the optical trap. Our system provides a platform that is free of driven motion and combines advantages of optical trapping, such as state-dependent confinement and nanoscale potentials, with the desirable properties of crystals of trapped ions, such as long-range interactions featuring collective motion. Starting with small numbers of ions, it has been proposed that these properties would allow the experimental study of many-body physics and the onset of structural quantum phase transitions between one- and two-dimensional crystals.

  9. Cerebral palsy characterization by estimating ocular motion

    NASA Astrophysics Data System (ADS)

    González, Jully; Atehortúa, Angélica; Moncayo, Ricardo; Romero, Eduardo

    2017-11-01

    Cerebral palsy (CP) comprises a large group of motion and posture disorders caused during fetal or infant brain development. Sensory impairment is commonly found in children with CP; between 40 and 75 percent present some form of vision problem or disability. An automatic characterization of cerebral palsy is herein presented by estimating ocular motion during a gaze-pursuit task. Specifically, after automatically detecting the eye location, an optical flow algorithm tracks the eye motion during a pre-established visual assignment. Subsequently, the optical flow trajectories are characterized in the velocity-acceleration phase plane. Differences are quantified in a small set of patients aged four to ten years.
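
    The velocity-acceleration phase-plane characterization can be sketched with central finite differences over the tracked eye positions; the trace and sampling rate below are hypothetical.

```python
def phase_plane(positions, dt):
    """Velocity-acceleration phase-plane points from sampled eye positions,
    using central finite differences."""
    points = []
    for i in range(1, len(positions) - 1):
        v = (positions[i + 1] - positions[i - 1]) / (2 * dt)
        a = (positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
        points.append((v, a))
    return points

# Hypothetical horizontal eye positions (degrees) sampled at 100 Hz
trace = [0.0, 0.5, 2.0, 4.5, 8.0]
print(phase_plane(trace, 0.01))  # constant acceleration traces a line
```

    Smooth pursuit and saccadic corrections occupy different regions of this plane, which is what makes it a useful feature space for comparing patients.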

  10. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle

    NASA Astrophysics Data System (ADS)

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, Minkwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-03-01

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles.

  11. Motion prediction of a non-cooperative space target

    NASA Astrophysics Data System (ADS)

    Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan

    2018-01-01

    Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of the target's motion information is the premise of target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, its dynamic parameters, such as inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion can then be acquired by substituting these identified parameters into the Euler's equations of the target. Accurate prediction requires precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method consists of two steps: (1) a rough estimate of the parameters is computed from observations of the target's motion, and (2) the best estimate is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm; it starts at the rough estimate obtained in the first step and finds a global minimum of the objective function under the guidance of the objective function's gradient. The search for the global minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
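
    Once the inertia parameters are identified, prediction reduces to propagating the torque-free Euler equations. The sketch below uses a simple forward-Euler integrator for illustration; the paper does not specify its integrator, and all numbers here are hypothetical, not identified values.

```python
def predict_rotation(omega, inertia, dt, steps):
    """Propagate the torque-free Euler equations for a rigid body with
    principal moments of inertia (Ix, Iy, Iz) by forward-Euler steps."""
    ix, iy, iz = inertia
    wx, wy, wz = omega
    for _ in range(steps):
        dwx = (iy - iz) * wy * wz / ix
        dwy = (iz - ix) * wz * wx / iy
        dwz = (ix - iy) * wx * wy / iz
        wx, wy, wz = wx + dt * dwx, wy + dt * dwy, wz + dt * dwz
    return wx, wy, wz

# Tumbling target spinning mainly about its x-axis
w = predict_rotation((1.0, 0.1, 0.0), (2.0, 4.0, 3.0), 1e-3, 1000)
print(w)  # predicted body-frame angular velocity after 1 s
```

    For torque-free motion, kinetic energy and angular momentum are conserved, which gives a quick sanity check on both the identified parameters and the integrator's step size.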

  12. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long-running theme in neuromorphic vision; such sensors include circuits that implement the local background-intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor, whose pixels have local gain control and adapt to background lighting, with a DSP microcontroller. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system computes global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using fewer than 5 k instruction cycles (12 instructions per pixel) per frame. At the 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
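
    The image interpolation algorithm (I2A) referenced above reduces global translation estimation to a tiny least-squares problem built from shifted copies of a reference frame. A minimal sketch, assuming a smooth scene and pure translation (function names are ours, not from the paper):

```python
import numpy as np

def i2a_translation(ref, cur, step=1):
    """Image interpolation algorithm (I2A): model the current frame as
    the reference plus (dx, dy) times shifted-difference basis images,
    and solve the resulting 2-unknown least-squares problem."""
    # Basis images approximating motion of +1 px along x (or y).
    fx = (np.roll(ref, step, axis=1) - np.roll(ref, -step, axis=1)) / (2.0 * step)
    fy = (np.roll(ref, step, axis=0) - np.roll(ref, -step, axis=0)) / (2.0 * step)
    d = cur - ref
    # Mask out the wrap-around borders introduced by np.roll.
    m = np.zeros(ref.shape, dtype=bool)
    m[step:-step, step:-step] = True
    A = np.stack([fx[m], fy[m]], axis=1)
    sol, *_ = np.linalg.lstsq(A, d[m], rcond=None)
    return sol  # estimated (dx, dy) in pixels

# Smooth synthetic scene shifted by one pixel along x.
x = np.arange(64.0)
X, Y = np.meshgrid(x, x)
scene = np.sin(2 * np.pi * X / 16) + np.cos(2 * np.pi * Y / 20)
moved = np.roll(scene, 1, axis=1)
dx, dy = i2a_translation(scene, moved)
```

    Because the model is linear in (dx, dy), the per-frame cost is a couple of image differences and a 2 × 2 solve, which is why the algorithm fits in a few instructions per pixel on a DSP.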

  13. Reducing motion artifacts for long-term clinical NIRS monitoring using collodion-fixed prism-based optical fibers.

    PubMed

    Yücel, Meryem A; Selb, Juliette; Boas, David A; Cash, Sydney S; Cooper, Robert J

    2014-01-15

    As the applications of near-infrared spectroscopy (NIRS) continue to broaden and long-term clinical monitoring becomes more common, minimizing signal artifacts due to patient movement becomes more pressing. This is particularly true in applications where clinically and physiologically interesting events are intrinsically linked to patient movement, as is the case in the study of epileptic seizures. In this study, we apply an approach common in the application of EEG electrodes to the application of specialized NIRS optical fibers. The method provides improved optode-scalp coupling through the use of miniaturized optical fiber tips fixed to the scalp using collodion, a clinical adhesive. We investigate and quantify the performance of this new method in minimizing motion artifacts in healthy subjects, and apply the technique to allow continuous NIRS monitoring throughout epileptic seizures in two epileptic in-patients. Using collodion-fixed fibers reduces the percent signal change of motion artifacts by 90% and increases the SNR 6- and 3-fold at the 690 and 830 nm wavelengths, respectively, when compared to a standard Velcro-based array of optical fibers. SNR also increased 2-fold during motion-free rest with the new probe design because of better light coupling between the fiber and the scalp. The change in both HbO and HbR during motion artifacts is found to be statistically lower for the collodion-fixed fiber probe. The collodion-fixed optical fiber approach has also allowed us to obtain good-quality NIRS recordings of three epileptic seizures in two patients despite excessive motion in each case. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the gas sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.
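
    The patent abstract does not spell out the retrieval math, but a common way to turn such transmission spectra into composition is Beer-Lambert inversion by least squares. A toy sketch with synthetic absorption bands (all spectra and component names here are made up for illustration):

```python
import numpy as np

# Synthetic absorption bands for two hypothetical fuel-gas components
# over 50 wavelength bins (stand-ins for a real calibration library).
wl = np.linspace(0.0, 1.0, 50)
eps = np.stack([np.exp(-((wl - 0.3) / 0.05) ** 2),   # "component A" band
                np.exp(-((wl - 0.7) / 0.08) ** 2)],  # "component B" band
               axis=1)

def concentrations(transmitted, incident, eps, path=1.0):
    """Invert the Beer-Lambert law: absorbance -ln(T/T0) = path * eps @ c,
    solved for the component concentrations c by least squares."""
    absorbance = -np.log(transmitted / incident)
    c, *_ = np.linalg.lstsq(path * eps, absorbance, rcond=None)
    return c

# Forward-model an 80/20 mixture and recover its composition.
true_c = np.array([0.8, 0.2])
T0 = np.ones_like(wl)
T = T0 * np.exp(-eps @ true_c)
est = concentrations(T, T0, eps)
```

    Given recovered concentrations and per-component heating values, the BTU content follows as a weighted sum.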

  15. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the gas sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution.

  16. Are recent empirical directivity models sufficient in capturing near-fault directivity effect?

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Shin; Cotton, Fabrice; Pagani, Marco; Weatherill, Graeme; Reshi, Owais; Mai, Martin

    2017-04-01

    It has been widely observed that ground-motion variability in the near field can be significantly higher than that commonly reported in published GMPEs, and this has been suggested to be a consequence of directivity. To capture the spatial variation in ground-motion amplitude and frequency caused by the near-fault directivity effect, several models for engineering applications have been developed using empirical data or, more recently, a combination of empirical and simulation data. Many studies have indicated that the large velocity pulses observed mainly in the near field are primarily related to slip heterogeneity (i.e., asperities), suggesting that slip heterogeneity is a more dominant controlling factor than rupture velocity or the source rise-time function. The first generation of broadband directivity models for ground-motion prediction does not account for heterogeneity of slip and rupture speed. With the increased availability of strong-motion recordings in the near-fault region (e.g., the NGA-West 2 database), directivity models moved from broadband to narrowband formulations to include the magnitude dependence of the period of the rupture directivity pulses, which are believed to be closely related to the heterogeneity of the slip distribution. After decades of directivity-model development, does the latest generation of models, i.e., the narrowband directivity models, better capture near-fault directivity effects, particularly in the presence of strong slip heterogeneity? To address this question, a set of simulated motions for an earthquake rupture scenario, with various kinematic slip models and hypocenter locations, is used as a basis for comparison with the narrowband directivity models proposed by the NGA-West 2 project for use with ground-motion prediction equations. The aim of this research is to gain better insight into the accuracy of narrowband directivity models under conditions commonly encountered in the real world. Our preliminary results show that empirical models including directivity factors predict physics-based ground motions and their spatial variability better than classical empirical models. However, the results clearly indicate that it remains a challenge for directivity models to capture strong directivity effects when a high level of slip heterogeneity is involved in the source rupture process.

  17. An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.

    PubMed

    Rañó, Iñaki

    2012-07-01

    Motion camouflage is a stealth behaviour observed in both hoverflies and dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to solving the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions on the agents' motion, while the performance index ensures camouflage trajectories. An extensive set of simulations supports the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms for obtaining sensor-based motion camouflage, for instance, in mobile robots.

  18. Brownian Motion in a Speckle Light Field: Tunable Anomalous Diffusion and Selective Optical Manipulation

    PubMed Central

    Volpe, Giorgio; Volpe, Giovanni; Gigan, Sylvain

    2014-01-01

    The motion of particles in random potentials occurs in several natural phenomena ranging from the mobility of organelles within a biological cell to the diffusion of stars within a galaxy. A Brownian particle moving in the random optical potential associated with a speckle pattern, i.e., a complex interference pattern generated by the scattering of coherent light by a random medium, provides an ideal model system to study such phenomena. Here, we derive a theory for the motion of a Brownian particle in a speckle field and, in particular, we identify its universal characteristic timescale. Based on this theoretical insight, we show how speckle light fields can be used to control the anomalous diffusion of a Brownian particle and to perform some basic optical manipulation tasks such as guiding and sorting. Our results might broaden the perspectives of optical manipulation for real-life applications. PMID:24496461
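
    The model system described above can be reproduced numerically with overdamped Langevin dynamics. In the sketch below a toy periodic potential stands in for an actual speckle pattern, and all parameters are arbitrary illustration values:

```python
import numpy as np

def msd_final(grad, n_traj=50, n_steps=5000, dt=1e-3, D=1.0, seed=3):
    """Mean squared displacement after n_steps of overdamped Langevin
    dynamics dx = -U'(x) dt + sqrt(2 D dt) * xi for n_traj particles
    started at the origin (mobility absorbed into U)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_traj)
    for _ in range(n_steps):
        x = x - grad(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_traj)
    return np.mean(x ** 2)

# Free diffusion vs motion in a toy periodic potential standing in for
# a speckle pattern (wells roughly 16 kT deep, so particles stay trapped).
free = msd_final(lambda x: 0.0 * x)
speckle = msd_final(lambda x: 50.0 * np.sin(2.0 * np.pi * x))
```

    The free particles spread diffusively (MSD ≈ 2Dt), while the trapped ones stay confined to their wells, the simplest signature of the subdiffusive regimes the paper discusses.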

  19. Nonlinear cavity optomechanics with nanomechanical thermal fluctuations

    PubMed Central

    Leijssen, Rick; La Gala, Giada R.; Freisem, Lars; Muhonen, Juha T.; Verhagen, Ewold

    2017-01-01

    Although the interaction between light and motion in cavity optomechanical systems is inherently nonlinear, experimental demonstrations to date have allowed a linearized description in all except highly driven cases. Here, we demonstrate a nanoscale optomechanical system in which the interaction between light and motion is so large (single-photon cooperativity C0 ≈ 10^3) that thermal motion induces optical frequency fluctuations larger than the intrinsic optical linewidth. The system thereby operates in a fully nonlinear regime, which pronouncedly impacts the optical response, displacement measurement and radiation pressure backaction. Specifically, we measure an apparent optical linewidth that is dominated by thermo-mechanically induced frequency fluctuations over a wide temperature range, and show that in this regime thermal displacement measurements cannot be described by conventional analytical models. We perform a proof-of-concept demonstration of exploiting the nonlinearity to conduct sensitive quadratic readout of nanomechanical displacement. Finally, we explore how backaction in this regime affects the mechanical fluctuation spectra. PMID:28685755

  20. Initial Atomic Motion Immediately Following Femtosecond-Laser Excitation in Phase-Change Materials.

    PubMed

    Matsubara, E; Okada, S; Ichitsubo, T; Kawaguchi, T; Hirata, A; Guan, P F; Tokuda, K; Tanimura, K; Matsunaga, T; Chen, M W; Yamada, N

    2016-09-23

    Despite the fact that phase-change materials are widely used for data storage, no consensus exists on the unique mechanism of their ultrafast phase change and the accompanying large and rapid optical change. Using a pump-probe method combining a femtosecond optical laser and an x-ray free-electron laser, we substantiate experimentally that, in both GeTe and Ge2Sb2Te5 crystals, a rattling motion mainly of Ge atoms takes place, with the off-center positions retained, just after femtosecond-optical-laser irradiation, eventually leading to a higher-symmetry or disordered state. This very initial rattling motion in the undistorted lattice can be related to the instantaneous optical change due to the loss of the resonant bonding that characterizes GeTe-based phase-change materials. Based on the amorphous structure derived by first-principles molecular dynamics simulation, we infer a plausible ultrafast amorphization mechanism via nonmelting.

  1. The role of optical flow in automated quality assessment of full-motion video

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, corruption of the raw data is inevitable. This can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether, and to what extent, the underlying content of the corrupted video can be analyzed by humans or machines. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art methods) on motion-based automated video quality assessment algorithms.

  2. SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION

    EPA Science Inventory

    Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and pattern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...

  3. NASA Has Joined America True's Design Mission for 2000

    NASA Technical Reports Server (NTRS)

    Steele, Gynelle C.

    1999-01-01

    Engineers at the NASA Lewis Research Center will support the America True design team led by America's Cup innovator Phil Kaiko. The joint effort between NASA and America True is encouraged by Mission HOME, the official public awareness campaign of the U.S. space community. NASA Lewis and America True have entered into a Space Act Agreement to focus on the interaction between the airfoil and the large deformations of the pretensioned sails and rigging, along with the dynamics related to the boat's motions. This work will require a coupled fluid and structural simulation. The simulation will include both a steady-state capability, to capture the quasi-static interactions between the air loads and sail geometry and the lift and drag on the boat, and a transient capability, to capture the sail/mast pumping effects resulting from hull motions.

  4. Postures and Motions Library Development for Verification of Ground Crew Human Systems Integration Requirements

    NASA Technical Reports Server (NTRS)

    Jackson, Mariea Dunn; Dischinger, Charles; Stambolian, Damon; Henderson, Gena

    2012-01-01

    Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineers in the future to infuse true-to-life human activities into CAD models to verify ground-systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the SLS and Orion launch systems. This paper explains how motion capture of unique ground-systems activities is being used to verify the human factors analysis requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.

  5. Postures and Motions Library Development for Verification of Ground Crew Human Factors Requirements

    NASA Technical Reports Server (NTRS)

    Stambolian, Damon; Henderson, Gena; Jackson, Mariea Dunn; Dischinger, Charles

    2013-01-01

    Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering analysts to infuse true-to-life human activities into CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the Space Launch System (SLS) and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors engineering requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.

  6. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is determining optimal facial mocap marker layouts. Despite its wide range of potential applications, this problem has not yet been systematically explored to date. This paper describes an approach that computes optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.

  7. Spared Ability to Perceive Direction of Locomotor Heading and Scene-Relative Object Movement Despite Inability to Perceive Relative Motion

    PubMed Central

    Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.

    2014-01-01

    Background All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. It would therefore be expected that an impairment in perception of relative motion should affect the ability to judge heading and perform other 3D motion tasks. Material/Methods We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. We therefore ran further experiments in which we isolated optic flow and scale change. Results Patients’ performance was in normal ranges on both tests. The finding that the ability to perceive heading can be retained despite an impairment in the ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS’s and SR’s performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that, in spite of severe deficits in relative motion in the frontoparallel (xy) plane, information from self-motion helped identify objects moving along an intercepting 3D relative-motion trajectory. Conclusions This result suggests a potential use of a flow-parsing strategy to detect, in a 3D world, the trajectory of moving objects when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375

  8. Real-time Kalman filter: Cooling of an optically levitated nanoparticle

    NASA Astrophysics Data System (ADS)

    Setter, Ashley; Toroš, Marko; Ralph, Jason F.; Ulbricht, Hendrik

    2018-03-01

    We demonstrate that a Kalman filter applied to estimate the position of an optically levitated nanoparticle, and operated in real-time within a field programmable gate array, is sufficient to perform closed-loop parametric feedback cooling of the center-of-mass motion to sub-Kelvin temperatures. The translational center-of-mass motion along the optical axis of the trapped nanoparticle has been cooled by 3 orders of magnitude, from a temperature of 300 K to a temperature of 162 ±15 mK.
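
    A minimal version of such a real-time position estimator can be sketched as a two-state Kalman filter with a matched oscillator model. The sampling rate, noise levels, and 5 Hz frequency below are made-up illustration values, not the authors' FPGA parameters:

```python
import numpy as np

def kalman_oscillator(meas, dt, w, q=1e-6, r=0.05 ** 2):
    """Kalman filter with a harmonic-oscillator model for the trapped
    particle's center-of-mass motion (illustrative parameters, not the
    authors' FPGA implementation)."""
    c, s = np.cos(w * dt), np.sin(w * dt)
    F = np.array([[c, s / w], [-w * s, c]])   # exact oscillator propagator
    H = np.array([[1.0, 0.0]])                # detector measures position only
    Q = q * np.eye(2)
    x = np.zeros(2)
    P = np.diag([1.0, 1e4])                   # vague prior on velocity
    est = []
    for z in meas:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + r           # innovation variance
        K = (P @ H.T)[:, 0] / S               # Kalman gain
        x = x + K * (z - x[0])                # update
        P = P - np.outer(K, (H @ P)[0])
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(1)
dt, f = 1e-3, 5.0                             # 1 kHz sampling, 5 Hz "oscillator"
t = np.arange(0.0, 2.0, dt)
true_pos = np.sin(2 * np.pi * f * t)
noisy = true_pos + rng.normal(0.0, 0.05, t.size)
est = kalman_oscillator(noisy, dt, w=2 * np.pi * f)
```

    The filtered position estimate suppresses the measurement noise substantially; in the experiment, the analogous real-time estimate drives the parametric feedback.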

  9. Kinematic parameters of signed verbs.

    PubMed

    Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina

    2013-10-01

    Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion to distinguish specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs) through the grammaticalization of physics and geometry for linguistic purposes. A prior motion capture study by Malaia and Wilbur (2012a) lent support to the event visibility hypothesis in ASL, but there have been no quantitative data from other SLs to test the generalization to other languages. The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first cross-linguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.

  10. Automated video-based assessment of surgical skills for training and evaluation in medical schools.

    PubMed

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan

    2016-09-01

    Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All these approaches, however, are still very time-consuming and subject to human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities. We compare different techniques for video-based surgical skill evaluation: techniques that capture motion information at a coarser granularity using symbols or words, that extract motion dynamics using textural patterns in a frame kernel matrix, and that analyze fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion-texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
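
    One simple feature in the spirit of the frequency analysis described above is the fraction of trajectory power at high frequencies: jerky, poorly controlled motion concentrates more power there than deliberate motion. The signals and cutoff below are synthetic illustrations, not the paper's pipeline:

```python
import numpy as np

def band_power_ratio(trace, fs, split_hz=3.0):
    """Fraction of signal power above split_hz (an illustrative
    frequency-domain motion feature, not the paper's exact method)."""
    spec = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    return spec[freqs >= split_hz].sum() / spec[1:].sum()

# Synthetic 100 Hz tool trajectories: deliberate motion vs the same
# motion with tremor-like high-frequency content added.
rng = np.random.default_rng(2)
t = np.arange(0.0, 4.0, 0.01)
smooth = np.sin(2 * np.pi * 0.5 * t)
jerky = smooth + 0.3 * np.sin(2 * np.pi * 8.0 * t) + 0.1 * rng.normal(size=t.size)

r_smooth = band_power_ratio(smooth, 100.0)   # nearly all power below 3 Hz
r_jerky = band_power_ratio(jerky, 100.0)     # noticeably larger ratio
```

    A classifier would of course use many such features per video segment; this only illustrates why frequency-domain descriptors separate skill levels.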

  11. Satellite attitude motion models for capture and retrieval investigations

    NASA Technical Reports Server (NTRS)

    Cochran, John E., Jr.; Lahr, Brian S.

    1986-01-01

    The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.

  12. Accuracy of Jump-Mat Systems for Measuring Jump Height.

    PubMed

    Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G

    2017-08-01

    Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
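
    Jump mats do not measure height directly; they infer it from flight time via the standard ballistic relation h = g t^2 / 8, which assumes the same body configuration at takeoff and landing. That modeling difference is also why marker-based motion capture, which tracks an actual point on the body, can report systematically different heights, as in the foot-length effect above. A minimal sketch:

```python
def jump_height_from_flight_time(t_flight, g=9.81):
    """Standard flight-time model used by contact/jump mats:
    h = g * t^2 / 8, assuming identical takeoff and landing postures."""
    return g * t_flight ** 2 / 8.0

# A 0.50 s flight corresponds to roughly 0.31 m of jump height.
h = jump_height_from_flight_time(0.50)
```

    The relation follows from symmetric projectile motion: the body rises for half the flight time, so h = g (t/2)^2 / 2.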

  13. The efficacy of interactive, motion capture-based rehabilitation on functional outcomes in an inpatient stroke population: a randomized controlled trial

    PubMed Central

    Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran DK; Bird, Marie-Louise

    2017-01-01

    Objective: To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Design: Randomized controlled clinical trial. Setting: Two subacute hospital rehabilitation units in Australia. Participants: In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Interventions: Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8–40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Main measures: Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Results: Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (−3.0 to 5.0) cm) with no difference between groups (P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) (P = 0.04). No adverse events were recorded during therapy. Conclusion: Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant. PMID:28719977

  14. Rapid encoding of relationships between spatially remote motion signals.

    PubMed

    Maruya, Kazushi; Holcombe, Alex O; Nishida, Shin'ya

    2013-02-06

    For visual processing, the temporal correlation of remote local motion signals is a strong cue to detect meaningful large-scale structures in the retinal image, because related points are likely to move together regardless of their spatial separation. While the processing of multi-element motion patterns involved in biological motion and optic flow has been studied intensively, the encoding of simpler pairwise relationships between remote motion signals remains poorly understood. We investigated this process by measuring the temporal rate limit for perceiving the relationship of two motion directions presented at the same time at different spatial locations. Compared to luminance or orientation, motion comparison was more rapid. Performance remained very high even when interstimulus separation was increased up to 100°. Motion comparison also remained rapid regardless of whether the two motion directions were similar to or different from each other. The exception was a dramatic slowing when the elements formed an orthogonal "T," in which two motions do not perceptually group together. Motion presented at task-irrelevant positions did not reduce performance, suggesting that the rapid motion comparison could not be ascribed to global optic flow processing. Our findings reveal the existence and unique nature of specialized processing that encodes long-range relationships between motion signals for quick appreciation of global dynamic scene structure.

  15. SU-E-P-41: Imaging Coordination of Cone Beam CT, On-Board Image Conjunction with Optical Image Guidance for SBRT Treatment with Respiratory Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Campbell, J

    2015-06-15

    Purpose: To spare normal tissue in SBRT lung/liver patients, especially those with significant tumor motion, image-guided respiratory motion management has been widely implemented in clinical practice. The purpose of this study was to evaluate the imaging coordination of cone beam CT and on-board X-ray imaging in conjunction with optical image guidance for SBRT treatment with motion management. Methods: In our clinic, a Varian Novalis Tx is used for treating SBRT patients with CBCT, while a BrainLAB ExacTrac X-ray imaging system in conjunction with optical guidance is primarily used for SRS patients. The CBCT and X-ray imaging systems were independently calibrated with a 1.0 mm tolerance. For SBRT lung/liver patients, the magnitude of tumor motion was measured from 4DCT and analyzed to determine whether a patient would benefit from respiratory motion management. For eligible patients, an additional breath-hold CT was scanned and used as the primary planning CT and as the reference image set for cone beam CT. During SBRT treatment, a pause-and-continue CBCT was performed with the patient holding breath; this may require 3–4 partial CBCT scans, combined into a whole CBCT, depending on how long the patient can hold their breath. After setup by CBCT, the ExacTrac X-ray imaging system was used to compare the patient's on-board X-ray images to DRRs based on the breath-hold CT. Results: For breath-hold SBRT treatments, after initially localizing patients with CBCT, we positioned them with the ExacTrac X-ray and optical imaging system. The observed deviations of the real-time optically guided position averaged 3.0, 2.5, and 1.5 mm in the longitudinal, vertical, and lateral directions, respectively, based on 35 treatments.
    Conclusion: This respiratory motion management practice improved our physicians' confidence in using tighter tumor margins to spare normal tissue in SBRT lung/liver patients.

  16. Coherent time-stretch transformation for real-time capture of wideband signals.

    PubMed

    Buckley, Brandon W; Madni, Asad M; Jalali, Bahram

    2013-09-09

    Time stretch transformation of wideband waveforms boosts the performance of analog-to-digital converters and digital signal processors by slowing down analog electrical signals before digitization. The transform is based on dispersive Fourier transformation implemented in the optical domain. A coherent receiver would be ideal for capturing the time-stretched optical signal. Coherent receivers offer improved sensitivity, allow for digital cancellation of dispersion-induced impairments and optical nonlinearities, and enable decoding of phase-modulated optical data formats. Because time stretch uses a chirped broadband (>1 THz) optical carrier, a new coherent detection technique is required. In this paper, we introduce and demonstrate the coherent time-stretch transformation, a technique that combines the dispersive Fourier transform with optically broadband coherent detection.

  17. Quantum correlations from a room-temperature optomechanical cavity.

    PubMed

    Purdy, T P; Grutter, K E; Srinivasan, K; Taylor, J M

    2017-06-23

    The act of position measurement alters the motion of an object being measured. This quantum measurement backaction is typically much smaller than the thermal motion of a room-temperature object and thus difficult to observe. By shining laser light through a nanomechanical beam, we measure the beam's thermally driven vibrations and perturb its motion with optical force fluctuations at a level dictated by the Heisenberg measurement-disturbance uncertainty relation. We demonstrate a cross-correlation technique to distinguish optically driven motion from thermally driven motion, observing this quantum backaction signature up to room temperature. We use the scale of the quantum correlations, which is determined by fundamental constants, to gauge the size of thermal motion, demonstrating a path toward absolute thermometry with quantum mechanically calibrated ticks. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  18. Simultaneous detection of rotational and translational motion in optical tweezers by measurement of backscattered intensity.

    PubMed

    Roy, Basudev; Bera, Sudipta K; Banerjee, Ayan

    2014-06-01

    We describe a simple yet powerful technique of simultaneously measuring both translational and rotational motion of mesoscopic particles in optical tweezers by measuring the backscattered intensity on a quadrant photodiode (QPD). While the measurement of translational motion by taking the difference of the backscattered intensity incident on adjacent quadrants of a QPD is well known, we demonstrate that rotational motion can be measured very precisely by taking the difference between the diagonal quadrants. The latter measurement eliminates the translational component entirely and leads to a detection sensitivity of around 50 mdeg at S/N of 2 for angular motion of a driven microrod. The technique is also able to resolve the translational and rotational Brownian motion components of the microrod in an unperturbed trap and can be very useful in measuring translation-rotation coupling of micro-objects induced by hydrodynamic interactions.
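    The quadrant arithmetic described in this record can be sketched in a few lines. The quadrant layout, the normalization by total intensity, and the sample values below are illustrative assumptions, not details from the paper.

```python
# Sketch: separating translational and rotational signals from a
# quadrant photodiode (QPD). Assumed quadrant layout:
#   Q1 | Q2
#   -------
#   Q3 | Q4
def qpd_signals(q1, q2, q3, q4):
    total = q1 + q2 + q3 + q4  # normalize against laser power drift
    x_trans = ((q2 + q4) - (q1 + q3)) / total   # left-right (adjacent) difference
    y_trans = ((q1 + q2) - (q3 + q4)) / total   # top-bottom (adjacent) difference
    rotation = ((q1 + q4) - (q2 + q3)) / total  # diagonal difference
    return x_trans, y_trans, rotation

# A purely lateral beam displacement changes the adjacent-quadrant
# difference but leaves the diagonal difference untouched:
x, y, rot = qpd_signals(0.2, 0.3, 0.2, 0.3)
```

    This illustrates why the diagonal difference eliminates the translational component: any shift that moves intensity between left and right (or top and bottom) halves contributes equally to both diagonals.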

  19. High-stability 48-core bendable and movable optical cable for FAST telescope optical transmission system

    NASA Astrophysics Data System (ADS)

    Liu, Hongfei; Pan, Gaofeng; Lin, Zhong; Liu, Cheng; Zhu, Wenbai; Nan, Rendong; Li, Chunsheng; Gao, Guanjun; Luo, Wenyong; Jin, Chengjin; Song, Jinyou

    2017-11-01

    The construction of the FAST telescope was completed in Guizhou province of China in September 2016, and a novel high-stability 48-core bendable and movable optical cable was developed and applied in the analog-data optical transmission system of FAST. The novel structure and material selection of this optical cable ensure high stability of optical power during the cable's round-trip motion as the telescope tracks a radio source. An accelerated experiment of 10^5 bend-and-stretch cycles was carried out on this optical cable, and real-time optical and RF signal power fluctuations were measured. The physical structure of the optical cable after 10^5 round-trip cycles remained in good condition; the real-time optical power attenuation fluctuation was smaller than 0.044 dB, and the real-time RF power fluctuation was smaller than 0.12 dB. The optical cable developed in this letter meets the requirements of FAST and has been applied in the FAST telescope optical transmission system.

  20. Optic flow-based collision-free strategies: From insects to robots.

    PubMed

    Serres, Julien R; Ruffier, Franck

    2017-09-01

    Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw, respectively (i.e., they cancel any rotational optic flow), ensuring pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
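    The speed/distance ratio property of translational optic flow lends itself to a minimal numeric sketch. The regulator below is a generic optic-flow set-point feedback loop in the spirit of the insect-inspired controllers this survey discusses; the set-point, gain, and wall geometry are illustrative assumptions, not values from the paper.

```python
def lateral_flow(v, d):
    """Translational optic flow (rad/s) from a wall at distance d,
    viewed at 90 degrees, for forward speed v: omega = v / d."""
    return v / d

def regulate_speed(d, v=1.0, omega_ref=2.0, gain=0.2, steps=200):
    """Feedback loop: adjust forward speed until the measured flow
    matches the reference. The agent never measures v or d directly,
    only their ratio, yet its speed ends up proportional to clearance."""
    for _ in range(steps):
        v += gain * (omega_ref - lateral_flow(v, d))
    return v
```

    Holding the flow set-point constant makes the converged speed scale with the distance to the wall (e.g., halving the clearance halves the speed), which is the hallmark of optic-flow-based speed control in tunnels.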

  1. Satellite capture as a restricted 2 + 2 body problem

    NASA Astrophysics Data System (ADS)

    Kanaan, Wafaa; Farrelly, David; Lanchares, Víctor

    2018-04-01

    A restricted 2 + 2 body problem is proposed as a possible mechanism to explain the capture of small bodies by a planet. In particular, we consider two primaries revolving in a circular mutual orbit and two small bodies of equal mass, neither of which affects the motion of the primaries. If the small bodies are temporarily captured in the Hill sphere of the smaller primary, they may get close enough to each other to exchange energy in such a way that one of them becomes permanently captured. Numerical simulations show that capture is possible for both prograde and retrograde orbits.

  2. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  3. Method and System for Producing Full Motion Media to Display on a Spherical Surface

    NASA Technical Reports Server (NTRS)

    Starobin, Michael A. (Inventor)

    2015-01-01

    A method and system for producing full motion media for display on a spherical surface is described. The method may include selecting a subject of full motion media for display on a spherical surface. The method may then include capturing the selected subject as full motion media (e.g., full motion video) in a rectilinear domain. The method may then include processing the full motion media in the rectilinear domain for display on a spherical surface, such as by orienting the full motion media, adding rotation to the full motion media, processing edges of the full motion media, and/or distorting the full motion media in the rectilinear domain for instance. After processing the full motion media, the method may additionally include providing the processed full motion media to a spherical projection system, such as a Science on a Sphere system.

  4. Measurement of nanoparticle size, suspension polydispersity, and stability using near-field optical trapping and light scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Schein, Perry; O'Dell, Dakota; Erickson, David

    2017-02-01

    Nanoparticles are becoming ubiquitous in applications including diagnostic assays, drug delivery, and therapeutics. However, challenges remain in the quality control of these products. Here we present methods for the orthogonal measurement of nanoparticle size, suspension polydispersity, and stability by tracking the motion of a nanoparticle in all three spatial dimensions as it interacts with an optical waveguide. These simultaneous, single-particle measurements address some of the gaps left by current measurement technologies such as nanoparticle tracking analysis, ζ-potential measurements, and absorption spectroscopy. As nanoparticles suspended in a microfluidic channel interact with the evanescent field of an optical waveguide, they experience forces and resulting motion in three dimensions: along the propagation axis of the waveguide (x-direction) they are propelled by optical forces; parallel to the plane of the waveguide and perpendicular to the optical propagation axis (y-direction) they experience an optical gradient force generated by the waveguide mode profile, which confines them in a harmonic potential well; and normal to the surface of the waveguide (z-direction) they experience an exponential downward optical force balanced by surface interactions, which confines the particle in an asymmetric well. Building on our Nanophotonic Force Microscopy technique, in this talk we explain how to simultaneously use the motion in the y-direction to estimate the size of the particle, the comparative velocity in the x-direction to measure the polydispersity of a particle population, and the motion in the z-direction to measure the potential energy landscape of the interaction, providing insight into colloidal stability.

  5. Design and Kinematic Evaluation of a Novel Joint-Specific Play Controller: Application for Wrist and Forearm Therapy

    PubMed Central

    Schwartz, Joel B.; Wilcox, Bethany; Costa, Laura; Kerman, Karen

    2015-01-01

    Background The wrist extensors and flexors are profoundly affected in most children with hemiparetic cerebral palsy (CP) and are the major target of physical therapists' and occupational therapists' efforts to restore useful hand functions. A limitation of any therapeutic or exercise program can be the level of the child's engagement or adherence. The proposed approach capitalizes on the primary learning avenue for children: toy play. Objective This study aimed to develop and evaluate the measurement accuracy of innovative, motion-specific play controllers that are engaging rehabilitative devices for enhancing therapy and promoting neural plasticity and functional recovery in children with CP. Design Design objectives of the play controller included a cost-effective, home-based supplement to physical therapy, the ability to calibrate the controller so that play can be accomplished with any active range of motion, and the capability of logging play activity and wrist motion over week-long periods. Methods Accuracy of the play controller in measuring wrist flexion-extension was evaluated in 6 children who were developing in a typical manner, using optical motion capture of the wrist and forearm as the gold standard. Results The error of the play controller was estimated at approximately 5 degrees in both maximum wrist flexion and extension. Limitations Measurements were taken during a laboratory session, with children without CP, and no toy or computer game was interfaced with the play controller. Therefore, the potential engagement of the proposed approach for therapy remains to be evaluated. Conclusions This study presented the concept, development, and wrist tracking accuracy of an inexpensive approach to extremity therapy that may have a health benefit for children with hemiparesis, and potentially for patients of any age with a wide range of extremity neuromotor impairments. PMID:25573759

  6. Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.

    PubMed

    Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M

    2016-07-26

    Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. 
By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
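    The excitation-ratiometry step can be illustrated numerically. This is a minimal sketch assuming a multiplicative motion artifact common to both excitation wavelengths and linear but unequal Vm sensitivities; all waveforms and sensitivity values are invented for illustration.

```python
import numpy as np

# Fluorescence under cyan/blue excitation on alternating frames:
#   F_color(t) = artifact(t) * (1 + s_color * Vm(t))
# The artifact (contraction, illumination drift) is common-mode;
# the voltage sensitivities s differ between excitation wavelengths.
t = np.linspace(0.0, 1.0, 400)
vm = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)        # toy action potentials
artifact = 1.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t + 0.7)  # motion artifact
s_cyan, s_blue = 0.10, -0.05                              # assumed sensitivities

f_cyan = artifact * (1 + s_cyan * vm)
f_blue = artifact * (1 + s_blue * vm)

ratio = f_cyan / f_blue  # common-mode artifact cancels: depends only on Vm
raw = f_cyan             # a single-wavelength signal still carries the artifact
```

    Dividing the two interleaved frames of a motion-tracked epicardial site removes the common multiplicative term exactly, leaving a ratio that is a fixed function of Vm alone.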

  7. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes

    PubMed Central

    Bertrand, Olivier J. N.; Lindemann, Jens P.; Egelhaaf, Martin

    2015-01-01

    Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth-structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. 
The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects. PMID:26583771

  8. Balance in non-hydrostatic rotating stratified turbulence

    NASA Astrophysics Data System (ADS)

    McKiver, William J.; Dritschel, David G.

    It is now well established that two distinct types of motion occur in geophysical turbulence: slow motions associated with potential vorticity advection, and fast oscillations due to inertia-gravity waves. When the flow can be diagnosed from the potential vorticity acting as a master variable, this is known as balance. In real geophysical flows, deviations from balance in the form of inertia-gravity waves occur; the degree of imbalance depends on the Rossby number (Ro) and the ratio of buoyancy to Coriolis frequencies (N/f). Here we compare an optimal potential vorticity balance with a 'nonlinear quasi-geostrophic balance' procedure, which expands the equations of motion to second order in Rossby number but retains the exact (unexpanded) definition of potential vorticity. This proves crucial for obtaining an accurate estimate of balanced motions. In the analysis of rotating stratified turbulence at Ro ≲ 1 and N/f ≫ 1, this procedure captures a significantly greater fraction of the underlying balance than standard (linear) quasi-geostrophic balance (which is based on the linearized equations about a state of rest). Nonlinear quasi-geostrophic balance also compares well with optimal potential vorticity balance, which captures the greatest fraction of the underlying balance overall. More fundamentally, the results of these analyses indicate that balance dominates in carefully initialized simulations of freely decaying rotating stratified turbulence up to O(1) Rossby numbers when N/f ≫ 1. The fluid motion exhibits important quasi-geostrophic features with, in particular, typical height-to-width scale ratios remaining comparable to f/N.

  9. Optical Modeling Activities for the James Webb Space Telescope (JWST) Project. II; Determining Image Motion and Wavefront Error Over an Extended Field of View with a Segmented Optical System

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.

    2004-01-01

    This is part two of a series on the optical modeling activities for JWST. Starting with the linear optical model discussed in part one, we develop centroid and wavefront error sensitivities for the special case of a segmented optical system such as JWST, where the primary mirror consists of 18 individual segments. Our approach extends standard sensitivity matrix methods used for systems consisting of monolithic optics, where the image motion is approximated by averaging ray coordinates at the image and residual wavefront error is determined with global tip/tilt removed. We develop an exact formulation using the linear optical model, and extend it to cover multiple field points for performance prediction at each instrument aboard JWST. This optical model is then driven by thermal and dynamic structural perturbations in an integrated modeling environment. Results are presented.
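    The linear-sensitivity idea underlying this record can be sketched generically: small rigid-body perturbations of the segments map linearly to image (centroid) motion through a sensitivity matrix. The matrix below is random placeholder data standing in for a computed linear optical model, not actual JWST values.

```python
import numpy as np

# Sketch of a sensitivity-matrix model: image motion responds linearly
# to small rigid-body perturbations of the optics. For 18 primary-mirror
# segments with 6 rigid-body degrees of freedom each:
rng = np.random.default_rng(1)
n_dof = 18 * 6
S = rng.standard_normal((2, n_dof))  # (x, y) centroid shift per unit DOF

def image_motion(perturbation):
    """Linear prediction of (x, y) centroid motion for a perturbation
    vector of segment rigid-body displacements."""
    return S @ perturbation

# Linearity: scaling the perturbation scales the predicted image motion.
p = rng.standard_normal(n_dof)
```

    In an integrated modeling environment, the perturbation vector would be supplied by thermal and structural-dynamics models, and a separate sensitivity matrix of the same form would predict residual wavefront error per field point.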

  10. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  11. Selectivity to Translational Egomotion in Human Brain Motion Areas

    PubMed Central

    Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare

    2013-01-01

    The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulations in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response for the translational flow which was maximum in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about the near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096

  12. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that spherical mirror of a laser tracking system can reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy. By simplifying the optical system model of laser tracking system based on spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of gimbal mount axes with the positions of spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of polarization beam splitter and biconvex lens along the optical axis and vertical direction of optical axis are driven by error motions of gimbal mount axes. In order to simplify the experimental process, the motion of biconvex lens is substituted by the motion of spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of gimbal mount axes could be recorded in the readings of laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if radial error motion and axial error motion were within ±10 μm. The experimental method simplified the experimental procedure and the spherical mirror could reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy of the laser tracking system.

  13. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  14. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  15. Local statistics of retinal optic flow for self-motion through natural sceneries.

    PubMed

    Calow, Dirk; Lappe, Markus

    2007-12-01

    Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
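    The dependency measurements in this record rest on estimating mutual information between correlated data sets. Below is a minimal histogram ("plug-in") estimator sketched on synthetic data standing in for depth, retinal speed, and flow direction; it is a standard estimator, not necessarily the authors' exact method, and the coupling strengths are invented.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
depth = rng.standard_normal(50_000)
speed = depth + 0.1 * rng.standard_normal(50_000)  # tightly coupled to depth
direction = rng.standard_normal(50_000)            # independent of depth

mi_coupled = mutual_information(depth, speed)      # large
mi_indep = mutual_information(depth, direction)    # near zero (small positive bias)
```

    Comparing such estimates across visual field positions is how one quantifies which regions of the flow field carry more information about depth versus ego-motion.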

  16. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676

  17. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed

    Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H

    2016-12-15

    Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances.
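
    The lazy-learning variant amounts to a nearest-neighbor lookup: a query vector of sparse sensor orientations retrieves the stored full-body pose whose features are closest. A minimal sketch (illustrative data layout, not the study's feature encoding):

```python
import numpy as np

def nearest_neighbor_pose(query_features, db_features, db_poses):
    """Lazy learning: return the full-body pose whose stored sparse
    sensor features are closest (Euclidean) to the query features."""
    d = np.linalg.norm(db_features - query_features, axis=1)
    return db_poses[int(np.argmin(d))]
```

    In practice the database rows would hold the fused orientations of the five IMUs and the corresponding motion-capture poses.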

  18. Using Xbox kinect motion capture technology to improve clinical rehabilitation outcomes for balance and cardiovascular health in an individual with chronic TBI.

    PubMed

    Chanpimol, Shane; Seamon, Bryant; Hernandez, Haniel; Harris-Love, Michael; Blackman, Marc R

    2017-01-01

    Motion capture virtual reality-based rehabilitation has become more common. However, therapists face challenges to the implementation of virtual reality (VR) in clinical settings. Use of motion capture technology such as the Xbox Kinect may provide a useful rehabilitation tool for the treatment of postural instability and cardiovascular deconditioning in individuals with chronic severe traumatic brain injury (TBI). The primary purpose of this study was to evaluate the effects of a Kinect-based VR intervention using commercially available motion capture games on balance outcomes for an individual with chronic TBI. The secondary purpose was to assess the feasibility of this intervention for eliciting cardiovascular adaptations. A single-system experimental design (n = 1) was utilized, which included baseline, intervention, and retention phases. Repeated measures were used to evaluate the effects of an 8-week supervised exercise intervention using two Xbox One Kinect games. Balance was characterized using the dynamic gait index (DGI), functional reach test (FRT), and Limits of Stability (LOS) test on the NeuroCom Balance Master. The LOS assesses end-point excursion (EPE), maximal excursion (MXE), and directional control (DCL) during weight-shifting tasks. Cardiovascular and activity measures were characterized by heart rate at the end of exercise (HRe), total gameplay time (TAT), and time spent in a therapeutic heart rate (TTR) during the Kinect intervention. Chi-square and ANOVA testing were used to analyze the data. Dynamic balance, characterized by the DGI, increased during the intervention phase, χ²(1, N = 12) = 12, p = .001. Static balance, characterized by the FRT, showed no significant changes. The EPE increased during the intervention phase in the backward direction, χ²(1, N = 12) = 5.6, p = .02, and notable improvements of DCL were demonstrated in all directions. HRe (F(2, 174) = 29.65, p < .001) and time in a TTR (F(2, 12) = 4.19, p = .04) decreased over the course of the intervention phase. Use of a supervised Kinect-based program that incorporated commercial games improved dynamic balance for an individual post severe TBI. Additionally, moderate cardiovascular activity was achieved through motion capture gaming. Further studies appear warranted to determine the potential therapeutic utility of commercial VR games in this patient population. Clinicaltrial.gov ID - NCT02889289.

  19. The complete optical oscilloscope

    NASA Astrophysics Data System (ADS)

    Lei, Cheng; Goda, Keisuke

    2018-04-01

    Observing ultrafast transient dynamics in optics is a challenging task. Two teams in Europe have now independently developed `optical oscilloscopes' that can capture both amplitude and phase information of ultrafast optical signals. Their schemes yield new insights into the nonlinear physics that takes place inside optical fibres.

  20. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.

  1. Quantum-enabled temporal and spectral mode conversion of microwave signals

    PubMed Central

    Andrews, R. W.; Reed, A. P.; Cicak, K.; Teufel, J. D.; Lehnert, K. W.

    2015-01-01

    Electromagnetic waves are ideal candidates for transmitting information in a quantum network as they can be routed rapidly and efficiently between locations using optical fibres or microwave cables. Yet linking quantum-enabled devices with cables has proved difficult because most cavity or circuit quantum electrodynamics systems used in quantum information processing can only absorb and emit signals with a specific frequency and temporal envelope. Here we show that the temporal and spectral content of microwave-frequency electromagnetic signals can be arbitrarily manipulated with a flexible aluminium drumhead embedded in a microwave circuit. The aluminium drumhead simultaneously forms a mechanical oscillator and a tunable capacitor. This device offers a way to build quantum microwave networks using separate and otherwise mismatched components. Furthermore, it will enable the preparation of non-classical states of motion by capturing non-classical microwave signals prepared by the most coherent circuit quantum electrodynamics systems. PMID:26617386

  2. Defect Interactions in Anisotropic Two-Dimensional Fluids

    NASA Astrophysics Data System (ADS)

    Stannarius, R.; Harth, K.

    2016-10-01

    Disclinations in liquid crystals bear striking analogies to defect structures in a wide variety of physical systems, and their straightforward optical observability makes them excellent models to study fundamental properties of defect interactions. We employ freely suspended smectic-C films, which behave as quasi-two-dimensional polar nematics. A procedure to capture high-strength disclinations in localized spots is introduced. These disclinations are released in a controlled way, and the motion of the mutually repelling topological charges with strength +1 is studied quantitatively. We demonstrate that the classical models, which employ elastic one-constant approximation, fail to describe their dynamics correctly. In realistic liquid crystals, even small differences between splay and bend constants lead to the selection of pure splay or pure bend +1 defects. For those, the models work only in very special configurations. In general, additional director walls are involved which reinforce the repulsive interactions substantially.

  3. Fiber-optic extrinsic Fabry-Perot vibration-isolated interferometer for use in absolute gravity meters.

    PubMed

    Canuteson, E L; Zumberge, M

    1996-07-01

    In an absolute gravity meter, a laser interferometer measures the position of a test mass that is falling in a vacuum. The calculated value of gravity is the average acceleration of the mass during a set of drops. Since systematic accelerations of the optical system will bias the measured value of gravity, various interferometer geometries have been implemented in the past to isolate the optical system from ground motion. We have developed and tested a low-finesse fiber-optic extrinsic Fabry-Perot interferometer that is fixed to the mass of a critically damped seismometer in which the effects of systematic ground motion and acoustic vibrations are reduced.

  4. Photoacoustic effect generated by moving optical sources: Motion in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Wenyu; Diebold, Gerald J.

    2016-03-28

    Although the photoacoustic effect is typically generated by pulsed or amplitude modulated optical beams, it is clear from examination of the wave equation for pressure that motion of an optical source in space will result in the production of sound as well. Here, the properties of the photoacoustic effect generated by moving sources in one dimension are investigated. The cases of a moving Gaussian beam, an oscillating delta function source, and an accelerating Gaussian optical source are reported. The salient feature of one-dimensional sources in the linear acoustic limit is that the amplitude of the beam increases in time without bound.

  5. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle

    PubMed Central

    Atallah, Vincent; Escarmant, Patrick; Vinh‐Hung, Vincent

    2016-01-01

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in‐house‐made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real‐time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high‐contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep‐breathing patterns. This low‐cost, computer‐vision system for real‐time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116

  6. Technical Note: A respiratory monitoring and processing system based on computer vision: prototype and proof of principle.

    PubMed

    Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent

    2016-09-08

    Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and time accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track motion of markers up to a 5 cm/s speed, at a frame rate of 30 fps, with submillimetric accuracy on mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion up to 4 mm for deep-breathing patterns. This low-cost, computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion.

  7. Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.

    PubMed

    Durant, Szonya; Wall, Matthew B; Zanker, Johannes M

    2011-09-09

    Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.

  8. Global velocity constrained cloud motion prediction for short-term solar forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping

    2016-09-01

    Cloud motion is the primary reason for short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the most used Particle Image Velocity (PIV) algorithm, which assumes the homogeneity of motion vectors, the proposed method can capture the accurate motion vector for each cloud block, including both the motional tendency and morphological changes. Specifically, global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation can be achieved by global velocity based cloud block researching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially in a short prediction horizon.
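
    The per-block matching step can be sketched as an exhaustive sum-of-squared-differences search within a window centred on the global-velocity guess; a generic illustration (the paper's multi-scale matching is more elaborate):

```python
import numpy as np

def match_block(prev, curr, top, left, size, search, guess=(0, 0)):
    """Find the displacement of a cloud block by exhaustive SSD search
    in a window centred on a global-velocity guess (gy, gx)."""
    block = prev[top:top+size, left:left+size]
    best, best_d = None, np.inf
    gy, gx = guess
    for dy in range(gy - search, gy + search + 1):
        for dx in range(gx - search, gx + search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y+size > curr.shape[0] or x+size > curr.shape[1]:
                continue  # candidate falls outside the image
            d = float(np.sum((curr[y:y+size, x:x+size] - block) ** 2))
            if d < best_d:
                best, best_d = (dy, dx), d
    return best
```

    Seeding the window with the PIV-derived global velocity lets each block capture its own motion vector, including deviations from the homogeneous field.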

  9. A slowly moving foreground can capture an observer's self-motion--a report of a new motion illusion: inverted vection.

    PubMed

    Nakamura, S; Shimojo, S

    2000-01-01

    We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.

  10. Automation of workplace lifting hazard assessment for musculoskeletal injury prevention.

    PubMed

    Spector, June T; Lieblich, Max; Bao, Stephen; McQuade, Kevin; Hughes, Margaret

    2014-01-01

    Existing methods for practically evaluating musculoskeletal exposures such as posture and repetition in workplace settings have limitations. We aimed to automate the estimation of parameters in the revised United States National Institute for Occupational Safety and Health (NIOSH) lifting equation, a standard manual observational tool used to evaluate back injury risk related to lifting in workplace settings, using depth camera (Microsoft Kinect) and skeleton algorithm technology. A large dataset (approximately 22,000 frames, derived from six subjects) of simultaneous lifting and other motions recorded in a laboratory setting using the Kinect (Microsoft Corporation, Redmond, Washington, United States) and a standard optical motion capture system (Qualysis, Qualysis Motion Capture Systems, Qualysis AB, Sweden) was assembled. Error-correction regression models were developed to improve the accuracy of NIOSH lifting equation parameters estimated from the Kinect skeleton. Kinect-Qualysis errors were modelled using gradient boosted regression trees with a Huber loss function. Models were trained on data from all but one subject and tested on the excluded subject. Finally, models were tested on three lifting trials performed by subjects not involved in the generation of the model-building dataset. Error-correction appears to produce estimates for NIOSH lifting equation parameters that are more accurate than those derived from the Microsoft Kinect algorithm alone. Our error-correction models substantially decreased the variance of parameter errors. In general, the Kinect underestimated parameters, and modelling reduced this bias, particularly for more biased estimates. Use of the raw Kinect skeleton model tended to result in falsely high safe recommended weight limits of loads, whereas error-corrected models gave more conservative, protective estimates. 
Our results suggest that it may be possible to produce reasonable estimates of posture and temporal elements of tasks such as task frequency in an automated fashion, although these findings should be confirmed in a larger study. Further work is needed to incorporate force assessments and address workplace feasibility challenges. We anticipate that this approach could ultimately be used to perform large-scale musculoskeletal exposure assessment not only for research but also to provide real-time feedback to workers and employers during work method improvement activities and employee training.
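
    The core error-correction idea — learn a mapping from the Kinect estimate to the reference value and apply it as a corrector — can be sketched with ordinary least squares standing in for the study's gradient-boosted trees (hypothetical data, not the study's model):

```python
import numpy as np

def fit_error_correction(kinect_est, reference):
    """Fit a linear corrector reference ≈ a*kinect + b by least squares
    (a simple stand-in for the gradient-boosted regression trees)."""
    A = np.column_stack([kinect_est, np.ones_like(kinect_est)])
    coef, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coef  # (a, b)

def apply_correction(coef, kinect_est):
    """Apply the learned corrector to new Kinect estimates."""
    return coef[0] * kinect_est + coef[1]
```

    Trained on paired Kinect/reference measurements, the corrector removes the systematic underestimation bias noted above.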

  11. Ultrafast chirped optical waveform recorder using a time microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Corey Vincent

    2015-04-21

    A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single-shot, in real time, but the process may be run repeatedly. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.

  12. Natural motion of the optic nerve head revealed by high speed phase-sensitive OCT

    NASA Astrophysics Data System (ADS)

    OHara, Keith; Schmoll, Tilman; Vass, Clemens; Leitgeb, Rainer A.

    2013-03-01

    We use phase-sensitive optical coherence tomography (OCT) to measure the deformation of the optic nerve head during the pulse cycle, motivated by the possibility that these deformations might be indicative of the progression of glaucoma. A spectral-domain OCT system acquired 100k A-scans per second, with measurements from a pulse-oximeter recorded simultaneously, correlating OCT data to the subject's pulse. Data acquisition lasted for 2 seconds, to cover at least two pulse cycles. A frame-rate of 200-400 B-scans per second results in a sufficient degree of correlated speckle between successive frames that the phase-differences between frames can be extracted. Bulk motion of the entire eye changes the phase by several full cycles between frames, but this does not severely hinder extracting the smaller phase-changes due to differential motion within a frame. The central cup moves about 5 μm/s relative to the retinal-pigment-epithelium edge, with tissue adjacent to blood vessels showing larger motion.

  13. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    NASA Astrophysics Data System (ADS)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts, however residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images, with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume by volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  14. A Surface-Coupled Optical Trap with 1-bp Precision via Active Stabilization.

    PubMed

    Okoniewski, Stephen R; Carter, Ashley R; Perkins, Thomas T

    2017-01-01

    Optical traps can measure bead motions with Å-scale precision. However, using this level of precision to infer 1-bp motion of molecular motors along DNA is difficult, since a variety of noise sources degrade instrumental stability. In this chapter, we detail how to improve instrumental stability by (1) minimizing laser pointing, mode, polarization, and intensity noise using an acousto-optical-modulator mediated feedback loop and (2) minimizing sample motion relative to the optical trap using a three-axis piezo-electric-stage mediated feedback loop. These active techniques play a critical role in achieving a surface stability of 1 Å in 3D over tens of seconds and a 1-bp stability and precision in a surface-coupled optical trap over a broad bandwidth (Δf = 0.03-2 Hz) at low force (6 pN). These active stabilization techniques can also aid other biophysical assays that would benefit from improved laser stability and/or Å-scale sample stability, such as atomic force microscopy and super-resolution imaging.

  15. Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU

    NASA Astrophysics Data System (ADS)

    Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji

    2016-12-01

    Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. Hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) approach is proposed to model a nonlinear dynamic texture, which provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. Dynamic texture is decomposed into DTUs as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and achieve parallelism.

  16. A low cost PSD-based monocular motion capture system

    NASA Astrophysics Data System (ADS)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor to employ with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD(Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments were performed to evaluate the performance of our prototype system. From the experimental results we see that the proposed system has the advantages of the compact size, the low cost, the easy installation, and the high frame rates to be suitable for high speed motion tracking in games.
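
    The monocular geometry described here can be sketched with a pinhole-lens model: the 2D PSD spot fixes the ray direction, and, assuming a calibrated LED with inverse-square intensity falloff, the measured intensity fixes the range along that ray. The names and the reference-intensity calibration below are illustrative, not the paper's actual procedure:

```python
import math

def marker_position(i_measured, i_ref, x_psd, y_psd, focal_len):
    """Estimate the 3D marker position from a monocular PSD reading.
    Range comes from inverse-square intensity falloff (i_ref measured
    at 1 m); direction comes from the 2D PSD spot through a pinhole
    lens of focal length focal_len."""
    r = math.sqrt(i_ref / i_measured)          # distance in metres
    ray = (x_psd / focal_len, y_psd / focal_len, 1.0)
    norm = math.sqrt(sum(c * c for c in ray))
    return tuple(r * c / norm for c in ray)    # (X, Y, Z)
```

    A single calibrated intensity measurement per marker is what lets one sensor recover depth without stereo, at the cost of sensitivity to LED output variation.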

  17. Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment

    NASA Astrophysics Data System (ADS)

    Akhmad, S.; Arendra, A.

    2018-01-01

    The purpose of this research is to build motion capture instrumentation using sensor fusion of accelerometer and gyroscope data to assist in RULA assessment. Processing of sensor orientation is done at every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator subject. Development of the kinematics model is done with SimMechanics in Simulink. This kinematics model receives streaming data from the sensors via a wireless sensor network. The output of the kinematics model is the relative angle between upper limb members, visualized on the monitor. This angular information is compared to the look-up table of the RULA worksheet to give the RULA score. The instrument's assessment results were compared with assessments by RULA assessors; in summary, there was no significant difference between the instrument's assessment and that of a human assessor.
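
    Accelerometer/gyroscope fusion of the kind performed on each sensor node is often implemented as a complementary filter; a minimal sketch (the boards' digital motion processor uses its own proprietary fusion):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """Fuse gyroscope and accelerometer readings into a joint angle:
    integrate the gyro rate for short-term accuracy, and lean on the
    accelerometer tilt angle to cancel long-term gyro drift."""
    return k * (angle + gyro_rate * dt) + (1.0 - k) * accel_angle
```

    With a zero gyro rate, repeated updates pull the estimate toward the accelerometer tilt angle, which is what removes drift.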

  18. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
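
    The incremental scheme — step the joints toward the instantaneous desired end-effector position while clamping joint speeds — can be sketched for a planar 2-link arm (illustrative kinematics; the paper's manipulator and limits differ):

```python
import numpy as np

def incremental_ik_step(theta, target, link_lengths, dt=0.02, max_speed=1.0):
    """One incremental IK update for a planar 2-link arm: move the
    end-effector toward target, clamping joint speeds to max_speed."""
    l1, l2 = link_lengths
    t1, t2 = theta
    # forward kinematics of the current configuration
    ee = np.array([l1*np.cos(t1) + l2*np.cos(t1+t2),
                   l1*np.sin(t1) + l2*np.sin(t1+t2)])
    # Jacobian of end-effector position w.r.t. joint angles
    J = np.array([[-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
                  [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)]])
    dtheta = np.linalg.pinv(J) @ (target - ee)            # incremental step
    dtheta = np.clip(dtheta / dt, -max_speed, max_speed) * dt  # speed limit
    return theta + dtheta
```

    Because each update starts from the current configuration, the solution branch is selected implicitly, which is how the incremental formulation avoids the multiple-solution ambiguity of closed-form IK.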

  19. Evaluation of the clinical efficacy of the PeTrack motion tracking system for respiratory gating in cardiac PET imaging

    NASA Astrophysics Data System (ADS)

    Manwell, Spencer; Chamberland, Marc J. P.; Klein, Ran; Xu, Tong; deKemp, Robert

    2017-03-01

    Respiratory gating is a common technique used to compensate for patient breathing motion and decrease the prevalence of image artifacts that can impact diagnoses. In this study a new data-driven respiratory gating method (PeTrack) was compared with a conventional optical tracking system. The performance of respiratory gating of the two systems was evaluated by comparing the number of respiratory triggers, patient breathing intervals and gross heart motion as measured in the respiratory-gated image reconstructions of rubidium-82 cardiac PET scans in test and control groups consisting of 15 and 8 scans, respectively. We found evidence suggesting that PeTrack is a robust patient motion tracking system that can be used to retrospectively assess patient motion in the event of failure of the conventional optical tracking system.

  20. Seismic evidence for rotating mantle flow around subducting slab edge associated with oceanic microplate capture

    NASA Astrophysics Data System (ADS)

    Mosher, Stephen G.; Audet, Pascal; L'Heureux, Ivan

    2014-07-01

    Tectonic plate reorganization at a subduction zone edge is a fundamental process that controls oceanic plate fragmentation and capture. However, the various factors responsible for these processes remain elusive. We characterize seismic anisotropy of the upper mantle in the Explorer region at the northern limit of the Cascadia subduction zone from teleseismic shear wave splitting measurements. Our results show that the mantle flow field beneath the Explorer slab is rotating anticlockwise from the convergence-parallel motion between the Juan de Fuca and the North America plates, re-aligning itself with the transcurrent motion between the Pacific and North America plates. We propose that oceanic microplate fragmentation is driven by slab stretching, thus reorganizing the mantle flow around the slab edge and further contributing to slab weakening and increase in buoyancy, eventually leading to cessation of subduction and microplate capture.

  1. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
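    The optimal-integration prediction tested here weights each cue inversely to its variance. A minimal sketch, using the reported 2.4° nonvisual sigma and an assumed 3.5° visual sigma (the visual value is my illustration, not a number from the study):

```python
import numpy as np

def optimal_combination(estimates, sigmas):
    """Minimum-variance (maximum-likelihood) cue combination:
    each cue is weighted inversely to its variance."""
    prec = 1.0 / np.asarray(sigmas, float) ** 2
    w = prec / prec.sum()
    combined = float(np.dot(w, estimates))
    combined_sigma = prec.sum() ** -0.5
    return combined, combined_sigma, w

# Cue-conflict condition: nonvisual cue says 0 deg, optic flow says 5 deg
# Nonvisual sigma = 2.4 deg (reported); visual sigma = 3.5 deg (assumed)
est, sigma_c, w = optimal_combination([0.0, 5.0], [2.4, 3.5])
```

    With these numbers the combined sigma lands near 2.0° and the visual weight near 0.32, both consistent with the ranges reported in the abstract.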

  2. Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis

    NASA Technical Reports Server (NTRS)

    Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.

    2004-01-01

    This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and will excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.

  3. The effect of external forces on discrete motion within holographic optical tweezers.

    PubMed

    Eriksson, E; Keen, S; Leach, J; Goksör, M; Padgett, M J

    2007-12-24

    Holographic optical tweezers is a widely used technique to manipulate the individual positions of optically trapped micron-sized particles in a sample. The trap positions are changed by updating the holographic image displayed on a spatial light modulator. The updating process takes a finite time, resulting in a temporary decrease of the intensity, and thus the stiffness, of the optical trap. We have investigated this change in trap stiffness during the updating process by studying the motion of an optically trapped particle in a fluid flow. We found a highly nonlinear behavior of the change in trap stiffness vs. changes in step size. For step sizes up to approximately 300 nm the trap stiffness is decreasing. Above 300 nm the change in trap stiffness remains constant for all step sizes up to one particle radius. This information is crucial for optical force measurements using holographic optical tweezers.

  4. Hockey, iPads, and Projectile Motion in a Physics Classroom

    ERIC Educational Resources Information Center

    Hechter, Richard P.

    2013-01-01

    With the increased availability of modern technology and handheld probeware for classrooms, the iPad and the Video Physics application developed by Vernier are used to capture and analyze the motion of an ice hockey puck within secondary-level physics education. Students collect, analyze, and generate digital modes of representation of physics…

  5. EFFECTS OF TURBULENCE, ECCENTRICITY DAMPING, AND MIGRATION RATE ON THE CAPTURE OF PLANETS INTO MEAN MOTION RESONANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M.

    2011-01-01

    Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate, and with increasing levels of turbulence, but increases with eccentricity damping. Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into, and maintenance of, mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence.

  6. Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.

    PubMed

    Frick, Eric; Rahmatalla, Salam

    2018-04-04

    The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.
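    SFO itself is not specified in enough detail here to reproduce, so the sketch below shows only the geometric idea its variance-minimization step builds on: a marker on a rigid segment rotating about a fixed joint stays at a constant distance from the joint center, so the center can be recovered by an algebraic sphere fit. All names and numbers are illustrative.

```python
import numpy as np

def fit_joint_center(points):
    """Algebraic sphere fit: find the point from which all marker
    positions are (near) equidistant, i.e., the fixed joint center
    about which the rigid segment rotates."""
    P = np.asarray(points, float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]

# Synthetic marker path: rotation about a known center at 0.25 m radius,
# corrupted with 1 mm measurement noise
rng = np.random.default_rng(0)
ang = rng.uniform(0, 2 * np.pi, 200)
elev = rng.uniform(-0.5, 0.5, 200)
c_true = np.array([0.1, -0.2, 0.3])
pts = c_true + 0.25 * np.c_[np.cos(ang) * np.cos(elev),
                            np.sin(ang) * np.cos(elev),
                            np.sin(elev)]
c_est = fit_joint_center(pts + rng.normal(0, 0.001, pts.shape))
```

    Soft-tissue artifact violates the constant-distance assumption, which is exactly the complication the SFO method above is designed to handle frame by frame.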

  7. Comparative Ergonomic Evaluation of Spacesuit and Space Vehicle Design

    NASA Technical Reports Server (NTRS)

    England, Scott; Cowley, Matthew; Benson, Elizabeth; Harvill, Lauren; Blackledge, Christopher; Perez, Esau; Rajulu, Sudhakar

    2012-01-01

    With the advent of the latest human spaceflight objectives, a series of prototype architectures was developed for a new launch and reentry spacesuit suited to the new mission goals. Four prototype suits were evaluated to compare their performance and enable the selection of the preferred suit components and designs. A consolidated approach to testing was taken: concurrently collecting suit mobility data, seat-suit-vehicle interface clearances, and qualitative assessments of suit performance within the volume of a Multi-Purpose Crew Vehicle mockup. It was necessary to maintain high fidelity in a mockup and use advanced motion-capture technologies in order to achieve the objectives of the study. These seemingly mutually exclusive goals were accommodated with the construction of an optically transparent and fully adjustable frame mockup. The construction of the mockup was such that it could be dimensionally validated rapidly with the motion-capture system. This paper describes the method used to create a space vehicle mockup compatible with use of an optical motion-capture system, the consolidated approach for evaluating spacesuits in action, and a way to use the complex data set resulting from a limited number of test subjects to generate hardware requirements for an entire population. Kinematics, hardware clearance, anthropometry (suited and unsuited), and subjective feedback data were recorded on 15 unsuited and 5 suited subjects. Unsuited subjects were selected chiefly based on their anthropometry in an attempt to find subjects who fell within predefined criteria for medium male, large male, and small female subjects. The suited subjects were selected as a subset of the unsuited medium male subjects and were tested in both unpressurized and pressurized conditions.
The prototype spacesuits were each fabricated in a single size to accommodate an approximately average-sized male, so select findings from the suit testing were systematically extrapolated to the extremes of the population to anticipate likely problem areas. This extrapolation was achieved by first comparing suited subjects' performance with their unsuited performance, and then applying the results to the entire range of the population. The use of a transparent space vehicle mockup enabled the collection of large amounts of data during human-in-the-loop testing. Mobility data revealed that most of the tested spacesuits had sufficient ranges of motion for the selected tasks to be performed successfully. A suited subject's inability to perform a task most often stemmed from poor field of view in a seated position, poor dexterity of the pressurized gloves, or suit/vehicle interface issues. Seat ingress and egress testing showed that problems with anthropometric accommodation did not exclusively occur with the largest or smallest subjects, but also with specific combinations of measurements that led to narrower seat ingress/egress clearance.

  8. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow

    PubMed Central

    Layton, Oliver W.; Fajen, Brett R.

    2016-01-01

    Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading. PMID:27341686

  9. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology and real-time averaging of registered images to eliminate image post-processing.

  10. Robust spectral-domain optical coherence tomography speckle model and its cross-correlation coefficient analysis

    PubMed Central

    Liu, Xuan; Ramella-Roman, Jessica C.; Huang, Yong; Guo, Yuan; Kang, Jin U.

    2013-01-01

    In this study, we proposed a generic speckle simulation for the optical coherence tomography (OCT) signal, obtained by convolving the point spread function (PSF) of the OCT system with a numerically synthesized random sample field. We validated our model and used the simulation method to study the statistical properties of cross-correlation coefficients (XCC) between A-scans, which have recently been applied in transverse motion analysis by our group. The simulation results show that oversampling is essential for accurate motion tracking; that exponential decay of the OCT signal leads to an underestimate of motion, which can be corrected; and that lateral heterogeneity of the sample leads to an overestimate of motion for the few pixels corresponding to structural boundaries. PMID:23456001
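    A minimal sketch of the simulation idea in this abstract (a white scatterer field convolved with a Gaussian lateral PSF, then XCC computed between displaced profiles): the correlation falls off over roughly one beam width, which is why oversampling matters for motion tracking. The beam width and shift values are illustrative, not parameters from the paper.

```python
import numpy as np

def synth_profile(n, beam_w, rng):
    """Lateral speckle intensity profile: a random complex scatterer
    field convolved with a Gaussian beam (the lateral PSF), then
    detected as intensity."""
    field = rng.normal(size=n) + 1j * rng.normal(size=n)
    x = np.arange(-4 * beam_w, 4 * beam_w + 1)
    psf = np.exp(-x ** 2 / (2 * beam_w ** 2))
    return np.abs(np.convolve(field, psf, mode='same')) ** 2

def xcc(a, b):
    """Normalized cross-correlation coefficient between two profiles."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
I = synth_profile(4096, beam_w=8, rng=rng)
# XCC between profiles displaced by 0, 0.25, 1, and 3 beam widths
shifts = [0, 2, 8, 24]
decay = [xcc(I[:4064], I[s:4064 + s]) for s in shifts]
```

    Inverting this decay curve (XCC as a function of displacement) is what turns correlation measurements into transverse motion estimates.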

  11. Free Space Optical Communication in the Military Environment

    DTIC Science & Technology

    2014-09-01

    Communications Commission FDA Food and Drug Administration FMV Full Motion Video FOB Forward Operating Base FOENEX Free-Space Optical Experimental Network...from radio and voice to chat message and email. Data-rich multimedia content, such as high-definition pictures, video chat, video files, and...introduction of full-motion video (FMV) via numerous different Intelligence Surveillance and Reconnaissance (ISR) systems, such as targeting pods on

  12. Intracavity optical trapping with Ytterbium doped fiber ring laser

    NASA Astrophysics Data System (ADS)

    Sayed, Rania; Kalantarifard, Fatemeh; Elahi, Parviz; Ilday, F. Omer; Volpe, Giovanni; Maragò, Onofrio M.

    2013-09-01

    We propose a novel approach for trapping micron-sized particles and living cells based on optical feedback. This approach can be implemented at low numerical aperture (NA=0.5, 20X) and long working distance. In this configuration, an optical tweezers is constructed inside a ring cavity fiber laser and the optical feedback in the ring cavity is controlled by the light scattered from a trapped particle. In particular, once the particle is trapped, the laser operation, optical feedback and intracavity power are affected by the particle motion. We demonstrate that with this configuration it is possible to stably hold micron-sized particles and single living cells in the focal spot of the laser beam. The calibration of the optical forces is achieved by tracking the Brownian motion of a trapped particle or cell and analysing its position distribution.
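    The position-distribution calibration mentioned at the end is, in the simplest harmonic-trap case, the equipartition method; a sketch under that assumption (the 20 nm excursion and temperature are illustrative numbers, not values from the paper):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition calibration for a harmonic optical trap:
    0.5*k*<x^2> = 0.5*kB*T  =>  k = kB*T / var(x)."""
    return KB * temperature_k / np.var(positions_m)

# Synthetic position record of a trapped bead with 20 nm rms excursion
rng = np.random.default_rng(2)
x = rng.normal(0.0, 20e-9, 100_000)
k = trap_stiffness(x)  # on the order of 1e-5 N/m, i.e. ~10 pN/um
```

    Wider position distributions mean weaker traps, so the same record that holds the particle also calibrates the force.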

  13. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed 'Buoys' that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field; thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays.
Under the proposed project we will initiate high-resolution (fine-scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research-quality models, which have not yet been brought to bear on this complex large-array wave/structure interaction problem.

  14. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.

    PubMed

    Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam

    2014-07-01

    Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus for sagittal-plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect™ measurements were representative of normal gait, the Kinect™ generally underestimated joint flexion and overestimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.
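    The concurrent-validity comparison described here reduces to Pearson correlation and RMS error between the two systems' joint-angle traces; a minimal sketch with synthetic data (the traces and noise levels are illustrative, not study data):

```python
import numpy as np

def concurrent_validity(reference, test):
    """Pearson correlation and RMS error between a reference system
    (e.g., marker-based capture) and a test system's joint-angle traces."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    r = float(np.corrcoef(reference, test)[0, 1])
    rmse = float(np.sqrt(np.mean((test - reference) ** 2)))
    return r, rmse

# Illustrative knee-flexion traces: the test system tracks the reference
# with added noise and a constant offset
t = np.linspace(0, 2 * np.pi, 500)
ref = 30 + 25 * np.sin(t)                      # degrees
rng = np.random.default_rng(3)
test = ref + rng.normal(0, 3, t.size) - 2.0    # noisy, biased
r, rmse = concurrent_validity(ref, test)
```

    High correlation with a nonzero RMSE, as in this sketch, is the pattern the study reports for stride timing: good agreement in shape, systematic offset in magnitude.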

  15. Detecting apoptosis using dynamic light scattering with optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Farhat, Golnaz; Mariampillai, Adrian; Yang, Victor X. D.; Czarnota, Gregory J.; Kolios, Michael C.

    2011-07-01

    A dynamic light scattering technique is implemented using optical coherence tomography (OCT) to measure the change in intracellular motion as cells undergo apoptosis. Acute myeloid leukemia cells were treated with cisplatin and imaged at a frame rate of 166 Hz using a 1300 nm swept-source OCT system at various times over a period of 48 h. Time correlation analysis of the speckle intensities indicated a significant increase in intracellular motion 24 h after treatment. This rise in intracellular motion correlated with histological findings of irregularly shaped and fragmented cells indicative of cell membrane blebbing and fragmentation.

  16. Effects of non-Gaussian Brownian motion on direct force optical tweezers measurements of the electrostatic forces between pairs of colloidal particles.

    PubMed

    Raudsepp, Allan; A K Williams, Martin; B Hall, Simon

    2016-07-01

    Measurements of the electrostatic force with separation between a fixed and an optically trapped colloidal particle are examined with experiment, simulation and analytical calculation. Non-Gaussian Brownian motion is observed in the position of the optically trapped particle when particles are close and traps weak. As a consequence of this motion, a simple least squares parameterization of direct force measurements, in which force is inferred from the displacement of an optically trapped particle as separation is gradually decreased, contains forces generated by the rectification of thermal fluctuations in addition to those originating directly from the electrostatic interaction between the particles. Thus, when particles are close and traps weak, simply fitting the measured direct force measurement to DLVO theory extracts parameters with modified meanings when compared to the original formulation. In such cases, however, physically meaningful DLVO parameters can be recovered by comparing the measured non-Gaussian statistics to those predicted by solutions to Smoluchowski's equation for diffusion in a potential.

  17. Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    PubMed Central

    Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko

    2008-01-01

    Background: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613

  18. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains

    PubMed Central

    Egelhaaf, Martin; Kern, Roland; Lindemann, Jens Peter

    2014-01-01

    Despite their miniature brains, insects such as flies, bees, and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around (“optic flow”) to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases where the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, motion detectors of the kind widespread in biological systems do not veridically represent the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of a biological motion detection mechanism. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates in a computationally parsimonious way the environment into behaviorally relevant nearby objects and—in many behavioral contexts—less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism. PMID:25389392

  19. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains.

    PubMed

    Egelhaaf, Martin; Kern, Roland; Lindemann, Jens Peter

    2014-01-01

    Despite their miniature brains, insects such as flies, bees, and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around ("optic flow") to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases where the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, motion detectors of the kind widespread in biological systems do not veridically represent the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of a biological motion detection mechanism. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates in a computationally parsimonious way the environment into behaviorally relevant nearby objects and, in many behavioral contexts, less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism.

  20. Three-dimensional quantification of cardiac surface motion: a newly developed three-dimensional digital motion-capture and reconstruction system for beating heart surgery.

    PubMed

    Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi

    2006-11-01

    This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached on three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were evaluated in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P < .0001) and maximum acceleration (left anterior descending 238.8 +/- 137.4 vs 169.4 +/- 132.7 m/s2; right coronary artery 315.0 +/- 123.9 vs 242.9 +/- 120.6 m/s2; left circumflex 307.9 +/- 151.0 vs 217.2 +/- 132.3 m/s2; P < .0001). This system is useful for a precise quantification of the heart surface movement. This helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating heart surgery.
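    Reconstructing 3-D marker positions from two camera views, as this system does, is classically handled by linear (DLT) triangulation. The sketch below assumes two hypothetical calibrated cameras rather than the authors' actual rig and calibration.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: recover a 3-D point from its 2-D
    projections in two calibrated cameras with 3x4 projection matrices."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: identity pose, and a 1 m baseline along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.2, -0.1, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

    Running this per marker per frame (at 955 frames per second in the study's setup) yields the xyz position streams from which velocities and accelerations are differentiated.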

  1. Modeling of Fluid-Membrane Interaction in Cellular Microinjection Process

    NASA Astrophysics Data System (ADS)

    Karzar-Jeddi, Mehdi; Diaz, Jhon; Olgac, Nejat; Fan, Tai-Hsi

    2009-11-01

    Cellular microinjection is a well-accepted method for delivering materials such as sperm, nuclei, or macromolecules into biological cells. To improve the success rate of in vitro fertilization and to establish the ideal operating conditions for a novel computer-controlled, rotationally oscillating intracytoplasmic sperm injection (ICSI) technology, we investigate the fluid-membrane interactions in the ICSI procedure. The procedure consists of anchoring the oocyte (a developing egg) with a holding pipette, penetrating the oocyte's zona pellucida (the outer membrane) and the oolemma (the plasma, or inner, membrane) with an injection micropipette, and finally delivering sperm into the oocyte for fertilization. To predict the large deformation of the oocyte membranes up to the piercing of the oolemma and the motion of fluids across both membranes, the dynamic fluid-pipette-membrane interactions are formulated by the coupled Stokes equations and a continuum membrane model based on Helfrich's energy theory. A boundary integral model is developed to simulate the transient membrane deformation and the local membrane stress induced by the longitudinal motion of the injection pipette. The model captures the essential features of the membranes shown in optical images of ICSI experiments and is capable of suggesting the optimal deformation level of the oolemma at which to start the rotational oscillations for piercing the oolemma.

  2. Hierarchical information fusion for global displacement estimation in microsensor motion capture.

    PubMed

    Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong

    2013-07-01

    This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping, based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation of each segment is obtained by a complementary Kalman filter (CKF) that compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and a zero-velocity update is included to reduce drift in the foot displacement estimate. Based on the segment orientations and the left/right foot locations, two global displacement estimates can be acquired from the left and right lower limbs separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and obtain a more accurate overall global displacement estimate. The updated global displacement is then propagated back to the left/right foot via the lower-body biomechanical model to restrict the drift in both foot displacement estimates. The experimental results show that our proposed method can accurately estimate human locomotion for the three different gait patterns, as evaluated against an optical motion tracker.
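    The zero-velocity update mentioned above is the key drift-reduction idea in foot-mounted inertial navigation: whenever the foot is detected to be in stance, the (drifting) integrated velocity is reset to zero. The following is a minimal 1-D sketch of that idea alone, not the paper's full complementary Kalman filter; the sensor bias and stance schedule are invented for illustration.

```python
import numpy as np

def integrate_with_zupt(acc, stance, dt=0.01):
    """Integrate acceleration to velocity, applying a zero-velocity
    update (ZUPT) whenever the foot is detected as stationary."""
    v = np.zeros(len(acc))
    for k in range(1, len(acc)):
        v[k] = v[k - 1] + acc[k] * dt
        if stance[k]:          # stance phase: true velocity is ~0,
            v[k] = 0.0         # so reset the drifting estimate
    return v

# Toy signal: zero true acceleration plus a constant sensor bias.
n = 1000
acc = np.full(n, 0.05)             # biased accelerometer reading
stance = np.zeros(n, dtype=bool)
stance[::100] = True               # periodic stance detections
v_zupt = integrate_with_zupt(acc, stance)
v_raw = np.cumsum(acc) * 0.01      # naive integration drifts unbounded
```

The ZUPT-corrected velocity stays bounded between stance events, while the naive integral grows linearly with the bias.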

  3. Dynamic Colloidal Molecules Maneuvered by Light-Controlled Janus Micromotors.

    PubMed

    Gao, Yirong; Mou, Fangzhi; Feng, Yizheng; Che, Shengping; Li, Wei; Xu, Leilei; Guan, Jianguo

    2017-07-12

    In this work, we propose and demonstrate a dynamic colloidal molecule that is capable of moving autonomously and performing swift, reversible, in-place assembly and dissociation with high accuracy by manipulating a TiO2/Pt Janus micromotor with light irradiation. Owing to the efficient motion of the TiO2/Pt Janus motor and the light-switchable electrostatic interactions between the micromotor and colloidal particles, the colloidal particles can be captured and assembled one by one on the fly, subsequently forming swimming colloidal molecules that mimic space-filling models of simple molecules with central atoms. The as-demonstrated dynamic colloidal molecules have a configuration accurately controlled and stabilized by regulating the time-dependent intensity of UV light, which controls the stop-and-go motion of the colloidal molecules. The dynamic colloidal molecules dissociate when the light irradiation is turned off, owing to the disappearance of the light-switchable electrostatic interaction between the motor and the colloidal particles. The strategy for the assembly of dynamic colloidal molecules is applicable to various charged colloidal particles. The simulated optical properties of a dynamic colloidal molecule imply that the results here may provide a novel approach for building functional microdevices, such as microlens arrays, in place in a swift and reversible manner.

  4. Determining loads acting on the pelvis in upright and recumbent birthing positions: A case study.

    PubMed

    Hemmerich, Andrea; Geens, Emily; Diesbourg, Tara; Dumas, Geneviève A

    2018-05-24

    The biomechanics of mothers' birthing positions and their impact on maternal and newborn health outcomes are poorly understood. Our objectives were to determine the loads applied to the female pelvis during dynamic movement that may occur during childbirth; findings are intended to inform clinical understanding and further research on birth positioning mechanics. An optical motion capture system and force platforms were used to collect upright and supine movement data from two pregnant and three non-pregnant participants. Using an inverse dynamics approach, normalized three-dimensional hip and sagittal plane lumbosacral joint moments were estimated during squatting, all-fours, and supine activities. During squatting, peak hip abduction moments were greater for our pregnant (compared with non-pregnant) participants and lumbosacral extension moments substantially exceeded those during walking. The all-fours activity, conversely, generated flexion moments at the L5/S1 joint throughout most of the cycle. In supine, the magnitude of the ground reaction force reached 100% body weight with legs and upper body raised (McRoberts' position); the centre of pressure remained cranial to the sacrum. Squatting generated appreciable moments at the hip and lumbosacral joints that could potentially affect pelvic motion during childbirth. Copyright © 2018 Elsevier Ltd. All rights reserved.
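    The inverse dynamics approach referred to above computes joint loads from measured motion and ground reaction forces. As a heavily simplified, quasi-static illustration (not the authors' full 3-D inverse dynamics model), the sagittal-plane moment of a ground reaction force about a joint centre reduces to a 2-D cross product; all numbers below are toy values.

```python
import numpy as np

def joint_moment_2d(grf, cop, joint):
    """Sagittal-plane moment of the ground-reaction force `grf` (N),
    acting at centre of pressure `cop` (m), about a joint centre (m).
    Quasi-static simplification: segment inertial terms are neglected."""
    r = np.asarray(cop, dtype=float) - np.asarray(joint, dtype=float)
    fx, fy = grf
    return r[0] * fy - r[1] * fx      # z-component of r x F, in N*m

# Toy numbers: ~body-weight vertical GRF acting 0.10 m anterior to a
# joint centre located 0.9 m above the ground.
M = joint_moment_2d(grf=(0.0, 700.0), cop=(0.10, 0.0), joint=(0.0, 0.9))
```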

  5. Autonomous Quality Control of Joint Orientation Measured with Inertial Sensors.

    PubMed

    Lebel, Karina; Boissy, Patrick; Nguyen, Hung; Duval, Christian

    2016-07-05

    Clinical mobility assessment is traditionally performed in laboratories using complex and expensive equipment. The low accessibility of such equipment, combined with the emerging trend to assess mobility in a free-living environment, creates a need for body-worn sensors (e.g., inertial measurement units, IMUs) that are capable of measuring the complexity of motor performance using meaningful measurements, such as joint orientation. However, the accuracy of joint orientation estimates from IMUs may be affected by the environment, the joint tracked, the type of motion performed, and its velocity. This study investigates a quality control (QC) process to assess the quality of orientation data based on features extracted from the raw inertial sensor signals. Joint orientation (trunk, hip, knee, ankle) of twenty participants was acquired by an optical motion capture system and IMUs during a variety of tasks (sit, sit-to-stand transition, walking, turning) performed under varying conditions (speed, environment). An artificial neural network was used to classify good and bad sequences of joint orientation with a sensitivity and a specificity above 83%. This study confirms the possibility of performing QC on IMU joint orientation data based on raw signal features. This innovative QC approach may be of particular interest in a big data context, such as for remote monitoring of patients' mobility.
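    The paper's classifier is an artificial neural network over features of the raw inertial signals. As a stand-in sketch of the same good/bad classification idea, the snippet below trains a tiny logistic regression on synthetic two-dimensional "signal features"; the features, labels, and separability are all invented for illustration.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, n_iter=2000):
    """Tiny logistic-regression classifier trained by gradient descent,
    standing in for the paper's neural-network quality classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(bad)
        g = p - y                                # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic "signal features": good sequences cluster at low values,
# bad (high-dynamics) sequences at high values.
rng = np.random.default_rng(0)
good = rng.normal(0.2, 0.05, (100, 2))
bad = rng.normal(0.8, 0.05, (100, 2))
X = np.vstack([good, bad])
y = np.r_[np.zeros(100), np.ones(100)]
w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```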

  6. Analysis of 2D THz-Raman spectroscopy using a non-Markovian Brownian oscillator model with nonlinear system-bath interactions.

    PubMed

    Ikeda, Tatsushi; Ito, Hironobu; Tanimura, Yoshitaka

    2015-06-07

    We explore and describe the roles of inter-molecular vibrations employing a Brownian oscillator (BO) model with linear-linear (LL) and square-linear (SL) system-bath interactions, which we use to analyze two-dimensional (2D) THz-Raman spectra obtained by means of molecular dynamics (MD) simulations. In addition to linear infrared absorption (1D IR), we calculated 2D Raman-THz-THz, THz-Raman-THz, and THz-THz-Raman signals for liquid formamide, water, and methanol using an equilibrium-nonequilibrium hybrid MD simulation. The calculated 1D IR and 2D THz-Raman signals are compared with results obtained from the LL+SL BO model applied through use of hierarchical Fokker-Planck equations with non-perturbative and non-Markovian noise. We find that all of the qualitative features of the 2D profiles of the signals obtained from the MD simulations are reproduced with the LL+SL BO model, indicating that this model captures the essential features of the inter-molecular motion. We analyze the fitted 2D profiles in terms of anharmonicity, nonlinear polarizability, and dephasing time. The origins of the echo peaks of the librational motion and the elongated peaks parallel to the probe direction are elucidated using optical Liouville paths.

  7. Efficient Generation of Dancing Animation Synchronizing with Music Based on Meta Motion Graphs

    NASA Astrophysics Data System (ADS)

    Xu, Jianfeng; Takagi, Koichi; Sakazawa, Shigeyuki

    This paper presents a system for automatic generation of dancing animation that is synchronized with a piece of music by re-using motion capture data. Basically, the dancing motion is synthesized according to the rhythm and intensity features of the music. For this purpose, we propose a novel meta motion graph structure to embed the necessary features, including both rhythm and intensity, which is constructed on the motion capture database beforehand. In this paper, we consider two scenarios for non-streaming music and streaming music, where global search and local search are required, respectively. In the former case, once a piece of music is input, an efficient dynamic programming algorithm can be employed to search globally for the best path in the meta motion graph, where an objective function is properly designed by measuring the quality of beat synchronization, intensity matching, and motion smoothness. In the latter case, the input music is stored in a buffer in streaming mode, and an efficient search method is presented for a certain amount of music data (called a segment) in the buffer with the same objective function, resulting in a segment-based search approach. For streaming applications, we define an additional property in the above meta motion graph to deal with the unpredictable future music, which guarantees that there is some motion to match the unknown remaining music. A user study with 60 subjects in total demonstrates that our system outperforms the state-of-the-art techniques in both scenarios. Furthermore, our system greatly improves the synthesis speed (a maximum speedup of more than 500 times), which is essential for mobile applications. We have implemented our system on commercially available smart phones and confirmed that it works well on these mobile phones.
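    The global search described for the non-streaming case is a standard shortest-path dynamic program over the motion graph. A minimal sketch of such a search, with a toy objective standing in for the paper's beat-synchronization, intensity-matching, and smoothness terms, might look like:

```python
# nodes: motion clips; node_cost(u, t) = how badly clip u fits beat t;
# trans_cost(u, v) = smoothness penalty for concatenating u then v.
def best_path(nodes, edges, node_cost, trans_cost, n_beats):
    INF = float("inf")
    # dp[t][u] = min total cost of a path ending at clip u on beat t
    dp = [{u: INF for u in nodes} for _ in range(n_beats)]
    back = [{u: None for u in nodes} for _ in range(n_beats)]
    for u in nodes:
        dp[0][u] = node_cost(u, 0)
    for t in range(1, n_beats):
        for u in nodes:
            if dp[t - 1][u] == INF:
                continue
            for v in edges[u]:
                c = dp[t - 1][u] + trans_cost(u, v) + node_cost(v, t)
                if c < dp[t][v]:
                    dp[t][v] = c
                    back[t][v] = u
    end = min(nodes, key=lambda u: dp[-1][u])
    path = [end]
    for t in range(n_beats - 1, 0, -1):
        path.append(back[t][path[-1]])
    path.reverse()
    return path, dp[-1][end]

# Toy graph: two clips that each fit alternating beats perfectly.
nodes = ["a", "b"]
edges = {"a": ["a", "b"], "b": ["a", "b"]}
node_cost = lambda u, t: 0 if (u == "a") == (t % 2 == 0) else 1
trans_cost = lambda u, v: 0
path, cost = best_path(nodes, edges, node_cost, trans_cost, n_beats=4)
```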

  8. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of the vision-based technique, which uses digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and introduces no additional mass. In this study, a high-speed camera system is developed to perform the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can complete one displacement extraction within 1 ms without having to install any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.
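    Lucas-Kanade template tracking iteratively refines warp parameters by Gauss-Newton minimization of the template-image error. The sketch below illustrates the idea in its simplest form, a 1-D translation-only registration in pure NumPy; it is not the paper's modified inverse compositional algorithm, and the signal and shift are toy values.

```python
import numpy as np

def lk_shift_1d(signal, template, x0, n_iter=50):
    """Estimate the (sub-sample) translation of `template` inside
    `signal` by iterative Lucas-Kanade (Gauss-Newton on the SSD error)."""
    n = len(template)
    idx = np.arange(n)
    p = float(x0)                        # current shift estimate
    for _ in range(n_iter):
        # Warp: sample the signal at the currently estimated position.
        warped = np.interp(idx + p, np.arange(len(signal)), signal)
        grad = np.gradient(warped)       # Jacobian of warp w.r.t. p
        err = template - warped
        dp = grad.dot(err) / grad.dot(grad)   # Gauss-Newton step
        p += dp
        if abs(dp) < 1e-8:
            break
    return p

# Toy data: a smooth bump; the template is the signal sampled at a
# known sub-sample offset.
x = np.arange(200, dtype=float)
signal = np.exp(-((x - 100.0) ** 2) / 200.0)
true_shift = 37.25
template = np.interp(np.arange(60) + true_shift, x, signal)
shift = lk_shift_1d(signal, template, x0=35.0)
```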

  9. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion

    PubMed Central

    Calabro, Finnegan J.; Vaina, Lucia Maria

    2016-01-01

    Background Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods Sixteen right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. Conclusions These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114

  10. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion.

    PubMed

    Calabro, Finnegan J; Vaina, Lucia Maria

    2016-05-27

    BACKGROUND Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). MATERIAL AND METHODS Sixteen right-handed healthy observers (ages 18-28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. RESULTS Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. CONCLUSIONS These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion.

  11. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos, and the results prove the robustness and precision of the method.
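    The RANSAC refinement step mentioned above can be illustrated generically: repeatedly fit a model to a random minimal sample and keep the hypothesis with the most inliers, then refit on the inlier set. The snippet below applies this to robust line fitting as a stand-in for the paper's ego-motion model; the sample size, threshold, and data are illustrative.

```python
import random

def ransac(points, fit, error, n_sample, threshold, n_iter=200, seed=0):
    """Generic RANSAC: fit a model to random minimal samples, keep the
    hypothesis explaining the most points, then refit on its inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        model = fit(rng.sample(points, n_sample))
        if model is None:                # degenerate minimal sample
            continue
        inliers = [p for p in points if error(model, p) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit(best_inliers), best_inliers

# Toy use: robustly fit y = a*x + b despite gross outliers.
def fit_line(pts):
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    den = n * sxx - sx * sx
    if den == 0:
        return None                      # all x equal: no unique line
    a = (n * sxy - sx * sy) / den
    return a, (sy - a * sx) / n

def line_error(model, p):
    a, b = model
    return abs(p[1] - (a * p[0] + b))

pts = [(x, 2.0 * x + 1.0) for x in range(20)]      # inliers on a line
pts += [(5, 40.0), (12, -30.0), (17, 80.0)]        # gross outliers
(a, b), inliers = ransac(pts, fit_line, line_error, n_sample=2,
                         threshold=0.5)
```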

  12. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos, and the results prove the robustness and precision of the method. PMID:27763508

  13. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  14. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera that produces three-dimensional images using an elliptical optical system is reported. A motion compensator provided in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  15. Migration of planets into and out of mean motion resonances in protoplanetary discs: analytical theory of second-order resonances

    NASA Astrophysics Data System (ADS)

    Xu, Wenrui; Lai, Dong

    2017-07-01

    Recent observations of Kepler multiplanet systems have revealed a number of systems with planets very close to second-order mean motion resonances (MMRs, with period ratio 1 : 3, 3 : 5, etc.). We present an analytic study of resonance capture and its stability for planets migrating in gaseous discs. Resonance capture requires slow convergent migration of the planets, with sufficiently large eccentricity damping time-scale Te and small pre-resonance eccentricities. We quantify these requirements and find that they can be satisfied for super-Earths under protoplanetary disc conditions. For planets captured into resonance, an equilibrium state can be reached, in which eccentricity excitation due to resonant planet-planet interaction balances eccentricity damping due to planet-disc interaction. This 'captured' equilibrium can be overstable, leading to partial or permanent escape of the planets from the resonance. In general, the stability of the captured state depends on the inner to outer planet mass ratio q = m1/m2 and the ratio of the eccentricity damping times. The overstability growth time is of the order of Te, but can be much larger for systems close to the stability threshold. For low-mass planets undergoing type I (non-gap opening) migration, convergent migration requires q ≲ 1, while the stability of the capture requires q ≳ 1. These results suggest that planet pairs stably captured into second-order MMRs have comparable masses. This is in contrast to first-order MMRs, where a larger parameter space exists for stable resonance capture. We confirm and extend our analytical results with N-body simulations, and show that for overstable capture, the escape time from the MMR can be comparable to the time the planets spend migrating between resonances.

  16. Light effects in the atomic-motion-induced Ramsey narrowing of dark resonances in wall-coated cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breschi, E.; Schori, C.; Di Domenico, G.

    2010-12-15

    We report on light shift and broadening in the atomic-motion-induced Ramsey narrowing of dark resonances prepared in alkali-metal vapors contained in wall-coated cells without buffer gas. The atomic-motion-induced Ramsey narrowing is due to the free motion of the polarized atomic spins in and out of the optical interaction region before spin relaxation. As a consequence of this effect, we observe a narrowing of the dark resonance linewidth as well as a reduction of the ground states' light shift when the volume of the interaction region decreases at constant optical intensity. The results can be intuitively interpreted as a dilution of the intensity effect, similar to a pulsed interrogation, due to the atomic motion. Finally, the influence of this effect on the performance of compact atomic clocks is discussed.

  17. A retinal code for motion along the gravitational and body axes

    PubMed Central

    Sabbah, Shai; Gemmer, John A.; Bhatia-Lin, Ananya; Manoff, Gabrielle; Castro, Gabriel; Siegel, Jesse K.; Jeffery, Nathan; Berson, David M.

    2017-01-01

    Summary Self-motion triggers complementary visual and vestibular reflexes supporting image stabilization and balance. Translation through space produces one global pattern of retinal image motion (optic flow), rotation another. We show that each subtype of direction-selective ganglion cell (DSGC) adjusts its direction preference topographically to align with specific translatory optic flow fields, creating a neural ensemble tuned for a specific direction of motion through space. Four cardinal translatory directions are represented, aligned with two axes of high adaptive relevance: the body and gravitational axes. One subtype maximizes its output when the mouse advances, others when it retreats, rises, or falls. ON-DSGCs and ON-OFF-DSGCs share the same spatial geometry but weight the four channels differently. Each subtype ensemble is also tuned for rotation. The relative activation of DSGC channels uniquely encodes every translation and rotation. Though retinal and vestibular systems both encode translatory and rotatory self-motion, their coordinate systems differ. PMID:28607486

  18. Using the Scroll Wheel on a Wireless Mouse as a Motion Sensor

    NASA Astrophysics Data System (ADS)

    Taylor, Richard S.; Wilson, William R.

    2010-12-01

    Since its inception in the mid-80s, the computer mouse has undergone several design changes. As the mouse has evolved, physicists have found new ways to utilize it as a motion sensor. For example, the rollers in a mechanical mouse have been used as pulleys to study the motion of a magnet moving through a copper tube as a quantitative demonstration of Lenz's law and to study mechanical oscillators (e.g., mass-spring system and compound pendulum).1-3 Additionally, the optical system in an optical mouse has been used to study a mechanical oscillator (e.g., mass-spring system).4 The argument for using a mouse as a motion sensor has been and continues to be availability and cost. This paper continues this tradition by detailing the use of the scroll wheel on a wireless mouse as a motion sensor.

  19. Quantum Optomechanics with Silicon Nanostructures

    NASA Astrophysics Data System (ADS)

    Safavi-Naeini, Amir H.

    Mechanical resonators are among the most basic and ubiquitous physical systems known. In on-chip form, they are used to process high-frequency signals in every cell phone, television, and laptop. Over the last few decades they have also been, in different shapes and forms, a critical part of progress in the quantum information sciences: kilogram-scale mirrors for gravitational-wave detection measure motion at its quantum limits, and the motion of single ions is used to link qubits for quantum computation. Optomechanics is a field primarily concerned with coupling light to the motion of mechanical structures. This thesis describes recent work with mechanical systems in the megahertz to gigahertz frequency range, formed by nanofabricating novel photonic/phononic structures on a silicon chip. These structures are designed to have both optical and mechanical resonances, and laser light is used to address and manipulate their motional degrees of freedom through radiation pressure forces. We laser-cool these mechanical resonators to their ground states and observe for the first time the quantum zero-point motion of a nanomechanical resonator. Conversely, we show that engineered mechanical resonances drastically modify the optical response of our structures, creating large effective optical nonlinearities not present in bulk silicon. We experimentally demonstrate aspects of these nonlinearities by proposing and observing "electromagnetically induced transparency" and light slowed to 6 m/s, as well as wavelength conversion and the generation of nonclassical optical radiation. Finally, the application of optomechanics to longstanding problems in quantum and classical communications is proposed and investigated.

  20. Position-Specific Hip and Knee Kinematics in NCAA Football Athletes

    PubMed Central

    Deneweth, Jessica M.; Pomeroy, Shannon M.; Russell, Jason R.; McLean, Scott G.; Zernicke, Ronald F.; Bedi, Asheesh; Goulet, Grant C.

    2014-01-01

    Background: Femoroacetabular impingement is a debilitating hip condition commonly affecting athletes playing American football. The condition is associated with reduced hip range of motion; however, little is known about the range-of-motion demands of football athletes. This knowledge is critical to effective management of this condition. Purpose: To (1) develop a normative database of game-like hip and knee kinematics used by football athletes and (2) analyze kinematic data by playing position. The hypothesis was that kinematics would be similar between running backs and defensive backs and between wide receivers and quarterbacks, and that linemen would perform the activities with the most erect lower limb posture. Study Design: Descriptive laboratory study. Methods: Forty National Collegiate Athletic Association (NCAA) football athletes, representing 5 playing positions (quarterback, defensive back, running back, wide receiver, offensive lineman), executed game-like maneuvers while lower body kinematics were recorded via optical motion capture. Passive hip range of motion at 90° of hip flexion was assessed using a goniometer. Passive range of motion, athlete physical dimensions, hip function, and hip and knee rotations were submitted to 1-way analysis of variance to test for differences between playing positions. Correlations between maximal hip and knee kinematics and maximal hip kinematics and passive range of motion were also computed. Results: Hip and knee kinematics were similar across positions. Significant differences arose with linemen, who used lower maximal knee flexion (mean ± SD, 45.04° ± 7.27°) compared with running backs (61.20° ± 6.07°; P < .001) and wide receivers (54.67° ± 6.97°; P = .048) during the cut. No significant differences were found among positions for hip passive range of motion (overall means: 102° ± 15° [flexion]; 25° ± 9° [internal rotation]; 25° ± 8° [external rotation]). 
Several maximal hip measures were found to negatively correlate with maximal knee kinematics. Conclusion: A normative database of hip and knee kinematics utilized by football athletes was developed. Position-specific analyses revealed that linemen use smaller joint motions when executing dynamic tasks but do not demonstrate passive range of motion deficits compared with other positions. Clinical Relevance: Knowledge of requisite game-like hip and knee ranges of motion is critical for developing goals for nonoperative or surgical recovery of hip and knee range of motion in the symptomatic athlete. These data help to identify playing positions that require remedial hip-related strength and conditioning protocols. Negative correlations between hip and knee kinematics indicated that constrained hip motion, as seen in linemen, could promote injurious motions at the knee. PMID:26535334

  1. Attention maintains mental extrapolation of target position: irrelevant distractors eliminate forward displacement after implied motion.

    PubMed

    Kerzel, Dirk

    2003-05-01

    Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.

  2. Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.

    PubMed

    Funk, Shawn; Lee, Donald H

    2016-01-01

Intraoperative photography and video capture are important for the hand surgeon. Recently, optical head-mounted computer displays have been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  3. How many atoms are required to characterize accurately trajectory fluctuations of a protein?

    NASA Astrophysics Data System (ADS)

    Cukier, Robert I.

    2010-06-01

Large molecules, whose thermal fluctuations sample a complex energy landscape, exhibit motions on an extended range of space and time scales. Principal component analysis (PCA) is often used to extract dominant motions that in proteins are typically domain motions. These motions are captured in the large eigenvalue (leading) principal components. There is also information in the small eigenvalues, arising from approximate linear dependencies among the coordinates. These linear dependencies suggest that instead of using all the atom coordinates to represent a trajectory, it should be possible to use a reduced set of coordinates with little loss in the information captured by the large eigenvalue principal components. In this work, methods that can monitor the correlation (overlap) between a reduced set of atoms and any number of retained principal components are introduced. For application to trajectory data generated by simulations, where the overall translational and rotational motion needs to be eliminated before PCA is carried out, some difficulties with the overlap measures arise and methods are developed to overcome them. The overlap measures are evaluated for a trajectory generated by molecular dynamics for the protein adenylate kinase, which consists of a stable core domain and two more mobile domains, referred to as the LID domain and the AMP-binding domain. The use of reduced sets corresponding, for the smallest set, to one-eighth of the alpha carbon (CA) atoms relative to using all the CA atoms is shown to predict the dominant motions of adenylate kinase. The overlap between using all the CA atoms and all the backbone atoms is essentially unity for a sum over PCA modes that effectively capture the exact trajectory. A reduction to a few atoms (three in the LID and three in the AMP-binding domain) shows that at least the first principal component, characterizing a large part of the LID-binding and AMP-binding motion, is well described. 
Based on these results, the overlap criterion should be applicable as a guide to postulating and validating coarse-grained descriptions of generic biomolecular assemblies.
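The overlap idea in this abstract can be illustrated with a small numerical sketch: compute principal components from all coordinates and from a reduced coordinate subset, then measure their subspace agreement with a root-mean-square inner product (RMSIP). The synthetic trajectory, coordinate counts, and the specific RMSIP measure below are illustrative assumptions, not the paper's actual data or definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trajectory": 500 frames of 30 coordinates dominated by two slow
# collective modes plus small independent noise (a stand-in for MD data).
n_frames, n_coords = 500, 30
t = np.linspace(0, 4 * np.pi, n_frames)
traj = (np.outer(np.sin(t), rng.normal(size=n_coords))
        + 0.5 * np.outer(np.cos(0.5 * t), rng.normal(size=n_coords))
        + 0.05 * rng.normal(size=(n_frames, n_coords)))

def leading_pcs(X, k):
    """Top-k principal components (rows) of mean-centered data X, via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def rmsip(X_full, keep_idx, k):
    """Root-mean-square inner product between PCs of the full coordinate set
    and PCs of a reduced set, zero-padded back to the full dimension."""
    P_full = leading_pcs(X_full, k)
    P_red = np.zeros((k, X_full.shape[1]))
    P_red[:, keep_idx] = leading_pcs(X_full[:, keep_idx], k)
    return np.sqrt(((P_full @ P_red.T) ** 2).sum() / k)

rmsip_full = rmsip(traj, np.arange(n_coords), 2)        # all coordinates
rmsip_half = rmsip(traj, np.arange(0, n_coords, 2), 2)  # every other one
```

With only every other coordinate retained, the leading subspace is still largely recovered, mirroring the finding that heavily reduced atom sets can predict the dominant motions.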

  4. Quality control procedures for dynamic treatment delivery techniques involving couch motion.

    PubMed

    Yu, Victoria Y; Fahimian, Benjamin P; Xing, Lei; Hristov, Dimitre H

    2014-08-01

    In this study, the authors introduce and demonstrate quality control procedures for evaluating the geometric and dosimetric fidelity of dynamic treatment delivery techniques involving treatment couch motion synchronous with gantry and multileaf collimator (MLC). Tests were designed to evaluate positional accuracy, velocity constancy and accuracy for dynamic couch motion under a realistic weight load. A test evaluating the geometric accuracy of the system in delivering treatments over complex dynamic trajectories was also devised. Custom XML scripts that control the Varian TrueBeam™ STx (Serial #3) axes in Developer Mode were written to implement the delivery sequences for the tests. Delivered dose patterns were captured with radiographic film or the electronic portal imaging device. The couch translational accuracy in dynamic treatment mode was 0.01 cm. Rotational accuracy was within 0.3°, with 0.04 cm displacement of the rotational axis. Dose intensity profiles capturing the velocity constancy and accuracy for translations and rotation exhibited standard deviation and maximum deviations below 3%. For complex delivery involving MLC and couch motions, the overall translational accuracy for reproducing programmed patterns was within 0.06 cm. The authors conclude that in Developer Mode, TrueBeam™ is capable of delivering dynamic treatment delivery techniques involving couch motion with good geometric and dosimetric fidelity.

  5. Physical activity classification using time-frequency signatures of motion artifacts in multi-channel electrical impedance plethysmographs.

    PubMed

    Khan, Hassan Aqeel; Gore, Amit; Ashe, Jeff; Chakrabartty, Shantanu

    2017-07-01

Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. Existing literature considers motion artifacts as a nuisance and generally discards the artifact-containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts to detect the underlying physical activities that give rise to them. In particular, we investigate whether the artifact pattern associated with a physical activity is unique, and whether it varies from one human subject to another. Data were recorded from 19 adult human subjects while they conducted 5 distinct artifact-inducing activities. A set of novel features based on the time-frequency signatures of the sensor outputs was then constructed. Our analysis demonstrates that these features enable high-accuracy detection of the underlying physical activity. Using an SVM classifier, we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities. This enables us to measure both respiratory and motion information using only one type of sensor, in contrast to conventional approaches to physical activity monitoring, which rely on additional hardware such as accelerometers to capture activity information.
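A minimal sketch of the kind of pipeline described here, with two caveats: the signals are synthetic sinusoids standing in for EIP motion artifacts, and a nearest-centroid rule replaces the paper's SVM so the example stays self-contained. The sampling rate, window length, and per-activity frequencies are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100  # Hz, assumed sensor sampling rate (illustrative)

def make_activity(freq, n=400):
    """Synthetic EIP-like trace: motion artifact modeled as a noisy oscillation."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=n)

def tf_feature(x, win=50):
    """Crude time-frequency signature: mean magnitude spectrum over windows."""
    frames = x[: len(x) // win * win].reshape(-1, win)
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    return spec.mean(axis=0)  # average spectral envelope across windows

# Two hypothetical "activities" with distinct artifact frequencies.
train = {"walk": [tf_feature(make_activity(2.0)) for _ in range(5)],
         "cough": [tf_feature(make_activity(8.0)) for _ in range(5)]}
centroids = {k: np.mean(v, axis=0) for k, v in train.items()}

def classify(x):
    """Assign the activity whose training centroid is nearest in feature space."""
    f = tf_feature(x)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

The point of the sketch is the feature design: windowed spectra summarize *when and at what frequencies* an artifact deposits energy, which is what separates the activity classes.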

  6. Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI

    PubMed Central

    Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.

    2017-01-01

    Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
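The "freely selectable frame rate" property comes from the golden-angle increment itself: successive radial spokes separated by 180°/φ ≈ 111.25° tile the half-circle nearly uniformly for any window of consecutive spokes, so the same acquisition can be re-binned retrospectively at 57 ms or any other temporal resolution. A small sketch of that geometry (spoke counts and tolerances are illustrative):

```python
import math

GOLDEN_ANGLE_DEG = 180 / ((1 + math.sqrt(5)) / 2)  # 180°/φ ≈ 111.246°

def spoke_angles(n_spokes, start=0):
    """Acquisition angles of spokes [start, start + n), wrapped to [0, 180)."""
    return [((start + i) * GOLDEN_ANGLE_DEG) % 180 for i in range(n_spokes)]

def max_gap_deg(angles):
    """Largest angular gap between adjacent spokes (uniform would be 180/n)."""
    a = sorted(angles)
    gaps = [hi - lo for lo, hi in zip(a, a[1:])] + [180 - a[-1] + a[0]]
    return max(gaps)
```

Any window of consecutive spokes, wherever it starts in the acquisition, has the same gap structure; for a Fibonacci spoke count the coverage is close to uniform, which is what makes retrospective reconstruction at a chosen frame rate possible.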

  7. Correction of motion artefacts and pseudo colour visualization of multispectral light scattering images for optical diagnosis of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula

    2009-10-01

    State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.

  9. The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.

    ERIC Educational Resources Information Center

    Nissan, M.; And Others

    1988-01-01

    Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)

  10. Tongue Motion Averaging from Contour Sequences

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a method to get the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at different times by sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…

  11. Grouping of optic flow stimuli during binocular rivalry is driven by monocular information.

    PubMed

    Holten, Vivian; Stuit, Sjoerd M; Verstraten, Frans A J; van der Smagt, Maarten J

    2016-10-01

    During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images, primarily affected by its eye-of-origin. Furthermore, global motion can affect grouping durations, but only under specific conditions. Namely, only when the two full optic flow patterns were presented locally. These results suggest that grouping during rivalry is primarily driven by monocular information even for motion stimuli thought to rely on higher-level motion areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Control of self-motion in dynamic fluids: fish do it differently from bees.

    PubMed

    Scholtyssek, Christine; Dacke, Marie; Kröger, Ronald; Baird, Emily

    2014-05-01

To detect and avoid collisions, animals need to perceive and control the distance and the speed with which they are moving relative to obstacles. This is especially challenging for swimming and flying animals that must control movement in a dynamic fluid without reference from physical contact to the ground. Flying animals primarily rely on optic flow to control flight speed and distance to obstacles. Here, we investigate whether swimming animals use self-motion control strategies similar to those of flying animals by directly comparing the trajectories of zebrafish (Danio rerio) and bumblebees (Bombus terrestris) moving through the same experimental tunnel. While the animals moved through the tunnel, black-and-white patterns produced (i) strong horizontal optic flow cues on both walls, (ii) weak horizontal optic flow cues on both walls and (iii) strong optic flow cues on one wall and weak optic flow cues on the other. We find that the mean speed of zebrafish does not depend on the amount of optic flow perceived from the walls. We further show that zebrafish, unlike bumblebees, move closer to the wall that provides the strongest visual feedback. This unexpected preference for strong optic flow cues may reflect an adaptation for self-motion control in water or in environments where visibility is limited. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  13. Cholinergic modulation of stimulus-driven attentional capture.

    PubMed

    Boucart, Muriel; Michael, George Andrew; Bubicco, Giovanna; Ponchel, Amelie; Waucquier, Nawal; Deplanque, Dominique; Deguil, Julie; Bordet, Régis

    2015-04-15

    Distraction is one of the main problems encountered by people with degenerative diseases that are associated with reduced cortical cholinergic innervations. We examined the effects of donepezil, a cholinesterase inhibitor, on stimulus-driven attentional capture. Reflexive attention shifts to a distractor are usually elicited by abrupt peripheral changes. This bottom-up shift of attention to a salient item is thought to be the result of relatively inflexible hardwired mechanisms. Thirty young male participants were randomly allocated to one of two groups: placebo first/donepezil second session or the opposite. They were asked to locate a target appearing above and below fixation whilst a peripheral distractor moved abruptly (motion-jitter attentional capture condition) or not (baseline condition). A classical attentional capture effect was observed under placebo: moving distractors interfered with the task in slowing down response times as compared to the baseline condition with fixed distractors. Increased interference from moving distractors was found under donepezil. We suggest that attentional capture in our paradigm likely involved low level mechanisms such as automatic reflexive orienting. Peripheral motion-jitter elicited a rapid reflexive orienting response initiated by a cholinergic signal from the brainstem pedunculo-pontine nucleus that activates nicotinic receptors in the superior colliculus. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Regular oscillations and random motion of glass microspheres levitated by a single optical beam in air

    DOE PAGES

    Moore, Jeremy; Martin, Leopoldo L.; Maayani, Shai; ...

    2016-02-03

We experimentally report on optical binding of many glass particles in air that levitate in a single optical beam. A diversity of particle sizes and shapes interact at long range in a single Gaussian beam. Our system dynamics span from oscillatory to random, and dimensionality ranges from 1D to 3D. In conclusion, the low loss for the center-of-mass motion of the beads could allow this system to serve as a standard many-body testbed, similar to what is done today with atoms, but at the mesoscopic scale.

  15. Optical and mechanical design of a "zipper" photonic crystal optomechanical cavity.

    PubMed

    Chan, Jasper; Eichenfield, Matt; Camacho, Ryan; Painter, Oskar

    2009-03-02

Design of a doubly-clamped beam structure capable of localizing mechanical and optical energy at the nanoscale is presented. The optical design is based upon photonic crystal concepts in which patterning of a nanoscale-cross-section beam can result in strong optical localization to an effective optical mode volume of 0.2 cubic wavelengths ((λc)³). By placing two identical nanobeams within the near field of each other, strong optomechanical coupling can be realized for differential motion between the beams. Current designs for thin-film silicon nitride beams at a wavelength of λ = 1.5 μm indicate that such structures can simultaneously realize an optical Q-factor of 7×10⁶, motional mass mu ≈ 40 picograms, mechanical mode frequency ΩM/2π ≈ 170 MHz, and an optomechanical coupling factor (gOM ≡ dωc/dx = ωc/LOM) with effective length LOM ≈ λ = 1.5 μm.

  16. Ray-tracing as a tool for efficient specification of beamline optical components

    NASA Astrophysics Data System (ADS)

    Pedreira, P.; Sics, I.; Llonch, M.; Ladrera, J.; Ribó, Ll.; Colldelram, C.; Nicolas, J.

    2016-09-01

We propose a method to determine the required performance of the positioning mechanics of the optical elements of a beamline. Generally, when designing and specifying a beamline, one assumes that the position and orientation of each optical element should be aligned to its ideal position. This would generally require six degrees of freedom per optical element, although symmetries reduce this number (e.g., a flat mirror is insensitive to yaw). One typically ends up motorizing many axes with high resolution and a large motion range. On the other hand, the diagnostics available at a beamline provide far fewer variables than the available motions, and the parameters one actually wants to optimize are very few: essentially spot size at the sample, flux, and spectral resolution. The result is that many configurations of the beamline are equivalent, and therefore indistinguishable from the ideal alignment in terms of performance. We propose a method in which the effect of misaligning each degree of freedom of the beamline is scanned by ray tracing. This allows building a linear system from which one can identify and select the best set of motions to control the relevant parameters of the beam. Once built, the model provides the required optical pseudomotors as well as the alignment and manufacturing requirements for all the motions, including the range, resolution and repeatability of the motorized axes.

  17. Coupling reconstruction and motion estimation for dynamic MRI through optical flow constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ningning; O'Connor, Daniel; Gu, Wenbo; Ruan, Dan; Basarab, Adrian; Sheng, Ke

    2018-03-01

    This paper addresses the problem of dynamic magnetic resonance image (DMRI) reconstruction and motion estimation jointly. Because of the inherent anatomical movements in DMRI acquisition, reconstruction of DMRI using motion estimation/compensation (ME/MC) has been explored under the compressed sensing (CS) scheme. In this paper, by embedding the intensity based optical flow (OF) constraint into the traditional CS scheme, we are able to couple the DMRI reconstruction and motion vector estimation. Moreover, the OF constraint is employed in a specific coarse resolution scale in order to reduce the computational complexity. The resulting optimization problem is then solved using a primal-dual algorithm due to its efficiency when dealing with nondifferentiable problems. Experiments on highly accelerated dynamic cardiac MRI with multiple receiver coils validate the performance of the proposed algorithm.
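The optical-flow constraint embedded in the reconstruction is, at its core, the brightness-constancy equation Ix·u + Iy·v + It ≈ 0 linking two frames through the motion field. A toy least-squares version for a single global translation (the frames and shift below are synthetic; the paper estimates a dense field at a coarse scale and couples it to the CS cost):

```python
import numpy as np

def global_flow(im0, im1):
    """Least-squares global translation (u, v) from brightness constancy:
    Ix*u + Iy*v + It = 0, stacked over all pixels."""
    Ix = np.gradient(im0, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(im0, axis=0)   # vertical spatial gradient
    It = im1 - im0                  # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv

# Smooth synthetic frame pair with a known sub-pixel shift (u=0.5, v=0.3 px).
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0))
frame = lambda dx, dy: np.sin(0.2 * (x - dx)) + np.cos(0.15 * (y - dy))
u, v = global_flow(frame(0, 0), frame(0.5, 0.3))
```

The same residual, evaluated between reconstructed DMRI frames, is what ties motion estimation and image reconstruction together in the joint optimization.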

  18. Computational modeling of magnetic nanoparticle targeting to stent surface under high gradient field

    PubMed Central

    Wang, Shunqiang; Zhou, Yihua; Tan, Jifu; Xu, Jiang; Yang, Jie; Liu, Yaling

    2014-01-01

A multi-physics model was developed to study the delivery of magnetic nanoparticles (MNPs) to the stent-implanted region under an external magnetic field. The model was first validated against experimental work in the literature. Then, the effects of external magnetic field strength, magnetic particle size, and flow velocity on MNP targeting and binding were analyzed through a parametric study. Two new dimensionless numbers were introduced to characterize the relative effects of Brownian motion (BM), magnetic-force-induced particle motion, and convective blood flow on MNP motion. It was found that stronger magnetic fields, larger MNPs, and slower flow velocities increase the capture efficiency of MNPs. The distribution of captured MNPs on the vessel along the axial and azimuthal directions was also discussed. Results showed that MNP density decreased exponentially along the axial direction after a one-dose injection, while it was uniform along the azimuthal direction in the whole stented region (averaged over all sections). For the beginning section of the stented region, the density ratio distribution of captured MNPs along the azimuthal direction is center-symmetrical, corresponding to the center-symmetrical distribution of magnetic force in that section. Two different generation mechanisms are revealed to form four main attraction regions. These results could serve as guidelines to design a better magnetic drug delivery system. PMID:24653546
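The abstract does not state its two dimensionless numbers explicitly, but the competition it describes can be sketched with one plausible ratio: magnetically driven drift displacement versus rms Brownian displacement under Stokes drag. All parameter values below (viscosity, temperature, the volume-scaled force) are illustrative assumptions, not the paper's definitions.

```python
import math

def drift_to_brownian_ratio(F_mag, d_p, t=1.0, mu=3.5e-3, T=310.0):
    """Magnetic drift displacement over time t divided by the 1-D rms Brownian
    displacement, for a sphere of diameter d_p in a fluid of viscosity mu
    (values here roughly blood-like; purely illustrative)."""
    kB = 1.380649e-23                 # Boltzmann constant [J/K]
    drag = 3 * math.pi * mu * d_p     # Stokes drag coefficient [N*s/m]
    drift = F_mag / drag * t          # deterministic displacement [m]
    D = kB * T / drag                 # Stokes-Einstein diffusivity [m^2/s]
    return drift / math.sqrt(2 * D * t)

def f_mag(d_p, coef=1e6):
    """Hypothetical magnetic force scaling with particle volume [N]."""
    return coef * d_p ** 3
```

With a volume-scaled force, this ratio grows with particle diameter, consistent with the reported result that larger MNPs are captured more efficiently.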

  19. Inertial Measurement Units for Clinical Movement Analysis: Reliability and Concurrent Validity

    PubMed Central

    Nicholas, Kevin; Sparkes, Valerie; Sheeran, Liba; Davies, Jennifer L

    2018-01-01

    The aim of this study was to investigate the reliability and concurrent validity of a commercially available Xsens MVN BIOMECH inertial-sensor-based motion capture system during clinically relevant functional activities. A clinician with no prior experience of motion capture technologies and an experienced clinical movement scientist each assessed 26 healthy participants within each of two sessions using a camera-based motion capture system and the MVN BIOMECH system. Participants performed overground walking, squatting, and jumping. Sessions were separated by 4 ± 3 days. Reliability was evaluated using intraclass correlation coefficient and standard error of measurement, and validity was evaluated using the coefficient of multiple correlation and the linear fit method. Day-to-day reliability was generally fair-to-excellent in all three planes for hip, knee, and ankle joint angles in all three tasks. Within-day (between-rater) reliability was fair-to-excellent in all three planes during walking and squatting, and poor-to-high during jumping. Validity was excellent in the sagittal plane for hip, knee, and ankle joint angles in all three tasks and acceptable in frontal and transverse planes in squat and jump activity across joints. Our results suggest that the MVN BIOMECH system can be used by a clinician to quantify lower-limb joint angles in clinically relevant movements. PMID:29495600
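The intraclass correlation coefficients used here can be computed from a two-way ANOVA decomposition. A pure-Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure); the rating data in the usage check are invented, not the study's:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `data` is a list of subjects, each a list of k ratings (one per rater/day)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # raters/days
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly reproducible ratings give an ICC of 1, and small day-to-day disagreements pull it below 1, which is the quantity being graded "poor" to "excellent" in the abstract.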

  20. Aging and visual 3-D shape recognition from motion.

    PubMed

    Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Hoyng, Stevie C; Shain, Lindsey M; Pedersen, Lauren E; Kinnard, Jonathan D; Higginbotham, Alexia J; Gilliam, Ashley N

    2017-11-01

    Two experiments were conducted to evaluate the ability of younger and older adults to recognize 3-D object shape from patterns of optical motion. In Experiment 1, participants were required to identify dotted surfaces that rotated in depth (i.e., surface structure portrayed using the kinetic depth effect). The task difficulty was manipulated by limiting the surface point lifetimes within the stimulus apparent motion sequences. In Experiment 2, the participants identified solid, naturally shaped objects (replicas of bell peppers, Capsicum annuum) that were defined by occlusion boundary contours, patterns of specular highlights, or combined optical patterns containing both boundary contours and specular highlights. Significant and adverse effects of increased age were found in both experiments. Despite the fact that previous research has found that increases in age do not reduce solid shape discrimination, our current results indicated that the same conclusion does not hold for shape identification. We demonstrated that aging results in a reduction in the ability to visually recognize 3-D shape independent of how the 3-D structure is defined (motions of isolated points, deformations of smooth optical fields containing specular highlights, etc.).

  1. Blindsight modulation of motion perception.

    PubMed

    Intriligator, James M; Xie, Ruiman; Barton, Jason J S

    2002-11-15

    Monkey data suggest that of all perceptual abilities, motion perception is the most likely to survive striate damage. The results of studies on motion blindsight in humans, though, are mixed. We used an indirect strategy to examine how responses to visible stimuli were modulated by blind-field stimuli. In a 26-year-old man with focal striate lesions, discrimination of visible optic flow was enhanced about 7% by blind-field flow, even though discrimination of optic flow in the blind field alone (the direct strategy) was at chance. Pursuit of an imagined target using peripheral cues showed reduced variance but not increased gain with blind-field cues. Preceding blind-field prompts shortened reaction times to visible targets by about 10 msec, but there was no attentional crowding of visible stimuli by blind-field distractors. A similar efficacy of indirect blind-field optic flow modulation was found in a second patient with residual vision after focal striate damage, but not in a third with more extensive medial occipito-temporal damage. We conclude that indirect modulatory strategies are more effective than direct forced-choice methods at revealing residual motion perception after focal striate lesions.

  2. Quantifying skin motion artifact error of the hindfoot and forefoot marker clusters with the optical tracking of a multi-segment foot model using single-plane fluoroscopy.

    PubMed

    Shultz, R; Kedgley, A E; Jenkyn, T R

    2011-05-01

The trajectories of skin-mounted markers tracked with optical motion capture are assumed to be an adequate representation of the underlying bone motions. However, it is well known that soft tissue artifact (STA) exists between marker and bone. This study quantifies the STA associated with the hindfoot and midfoot marker clusters of a multi-segment foot model. To quantify the STA of the hindfoot and midfoot marker clusters with respect to the calcaneus and navicular, respectively, fluoroscopic images were collected from 27 subjects during four quasi-static positions: (1) quiet standing (non-weight-bearing), (2) heel strike (weight-bearing), (3) midstance (weight-bearing) and (4) toe-off (weight-bearing). The translation and rotation components of STA were calculated in the sagittal plane. Translational STA at the calcaneus varied from 5.9 ± 7.3 mm at heel strike to 12.1 ± 0.3 mm at toe-off. For the navicular, the translational STA ranged from 7.6 ± 7.6 mm at heel strike to 16.4 ± 16.7 mm at toe-off. Rotational STA was relatively smaller for both bones at all foot positions. For the calcaneus it varied from 0.1 ± 2.2° at heel strike to 0.2 ± 0.6° at toe-off. For the navicular, the rotational STA ranged from 0.6 ± 0.9° at heel strike to 0.7 ± 0.7° at toe-off. The largest translational STA found in this study (16 mm for the navicular) was smaller than those reported in the literature for the thigh and the lower leg, but was larger than the STA of individual spherical markers affixed to the foot. The largest errors occurred at the toe-off position for all subjects for both the hindfoot and midfoot clusters. Future studies are recommended to quantify the true three-dimensional STA of the entire foot during gait. Copyright © 2011. Published by Elsevier B.V.

  3. Two-character motion analysis and synthesis.

    PubMed

    Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong

    2008-01-01

    In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.

  4. Human Factors Virtual Analysis Techniques for NASA's Space Launch System Ground Support using MSFC's Virtual Environments Lab (VEL)

    NASA Technical Reports Server (NTRS)

    Searcy, Brittani

    2017-01-01

    Using virtual environments to assess complex, large-scale human tasks provides timely and cost-effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance with human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, the Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th percentile (approx. 5 ft) female and a 95th percentile (approx. 6 ft 1 in) male. In addition, the analysis helps identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame-by-frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.

  5. Kinematic discrimination of ataxia in horses is facilitated by blindfolding.

    PubMed

    Olsen, E; Fouché, N; Jordan, H; Pfau, T; Piercy, R J

    2018-03-01

    Agreement among experienced clinicians is poor when assessing the presence and severity of ataxia, especially when signs are mild. Consequently, objective gait measurements might be beneficial for the assessment of horses with neurological diseases. To assess diagnostic criteria using motion capture to measure variability in spatial gait characteristics and swing duration derived from ataxic and non-ataxic horses, and to assess whether variability increases with blindfolding. Cross-sectional. A total of 21 horses underwent measurements in a gait laboratory and live neurological grading by multiple raters. In the gait laboratory, the horses walked across a runway surrounded by a 12-camera motion capture system with a sample frequency of 240 Hz. They walked normally and with a blindfold in at least three trials each. Displacements of reflective markers on the head, fetlock, hoof, fourth lumbar vertebra, tuber coxae and sacrum derived from three to four consecutive strides were processed, and descriptive statistics, receiver operating characteristic (ROC) analysis to determine diagnostic sensitivity, specificity and area under the curve (AUC), and the correlation between median ataxia grade and gait parameters were determined. For horses with a median ataxia grade ≥2, the coefficient of variation for the location of maximum vertical displacement of the pelvic and thoracic distal limbs gave good diagnostic yield. The hooves of the thoracic limbs yielded an AUC of 0.81 with 64% sensitivity and 90% specificity. Blindfolding exacerbated the variation for ataxic horses compared with non-ataxic horses, with the hoof marker having an AUC of 0.89 with 82% sensitivity and 90% specificity. The low number of consecutive strides per horse obtained with motion capture could decrease diagnostic utility. Motion capture can objectively aid the assessment of horses with ataxia. Furthermore, blindfolding increases variation in distal pelvic limb kinematics, making it a useful clinical tool. © 2017 EVJ Ltd.
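
    The two statistics at the heart of this analysis, the stride-to-stride coefficient of variation and the ROC area under the curve, can be sketched as follows (synthetic score arrays stand in for the study's kinematic data; the AUC is computed via its Mann-Whitney interpretation):

```python
import numpy as np

def coeff_variation(x):
    """Stride-to-stride coefficient of variation: sample SD over mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def roc_auc(scores_pos, scores_neg):
    """ROC area under the curve via the Mann-Whitney statistic: the probability
    that a randomly chosen positive (ataxic) case scores above a negative one,
    counting ties as half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    greater = (pos > neg).sum()
    ties = (pos == neg).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```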

  6. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform on TI's high-performance TMS320DM642 DSP, and then design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver program based on TI's class mini-driver, and a network output program based on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  7. The KIT Motion-Language Dataset.

    PubMed

    Plappert, Matthias; Mandery, Christian; Asfour, Tamim

    2016-12-01

    Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We therefore propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
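
    Perplexity-based selection can be sketched with a Laplace-smoothed unigram language model standing in for whatever model is actually used in the paper: annotations with the highest perplexity are flagged as under-represented or suspect and queued for further annotation. The token lists and helper names here are illustrative assumptions:

```python
import math
from collections import Counter

def unigram_perplexity(tokens, counts, total, vocab):
    """Perplexity of a token list under a Laplace-smoothed unigram model."""
    logp = sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
    return math.exp(-logp / len(tokens))

def select_for_annotation(annotations, k=2):
    """Return indices of the k annotations with the highest perplexity,
    i.e. the most unusual wording relative to the whole data set."""
    counts = Counter(t for ann in annotations for t in ann)
    total = sum(counts.values())
    vocab = len(counts) + 1          # +1 pseudo-slot for unseen tokens
    scored = sorted(((unigram_perplexity(a, counts, total, vocab), i)
                     for i, a in enumerate(annotations)), reverse=True)
    return [i for _, i in scored[:k]]
```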

  8. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow estimation, the reliability of the resulting motion vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms produce better results under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on our proposed models confirm better noise tolerance in the optical flow's MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models on several typical sequences with different foreground and background movement speeds, where the sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show that the models are highly noise tolerant.
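
    The Lorentzian norm ρ(r) = log(1 + (r/σ)²/2) underlying the adaptive influence function gives each residual r the weight ψ(r)/r = 2/(2σ² + r²), so large residuals are progressively down-weighted rather than dominating a least-squares fit. A hedged one-dimensional sketch of how such weights might filter noisy motion-vector components (the smoothing scheme is illustrative, not the paper's exact model):

```python
import numpy as np

def lorentzian_weight(r, sigma=1.0):
    """Weight psi(r)/r for the Lorentzian norm rho(r) = log(1 + (r/sigma)^2 / 2);
    large residuals receive small weights, giving robustness to outliers."""
    return 2.0 / (2.0 * sigma**2 + r**2)

def robust_smooth_mv(mv, sigma=1.0):
    """One reweighted smoothing pass over a 1-D array of motion-vector
    components: each value is pulled toward its neighbours, with outlier
    neighbours down-weighted by the Lorentzian weight."""
    mv = np.asarray(mv, dtype=float)
    out = mv.copy()
    for i in range(1, len(mv) - 1):
        nbrs = mv[[i - 1, i + 1]]
        w = lorentzian_weight(nbrs - mv[i], sigma)
        out[i] = (mv[i] + (w * nbrs).sum()) / (1.0 + w.sum())
    return out
```

A spike in an otherwise smooth MV field is pulled toward its neighbours, while values that already agree with their neighbours are left essentially unchanged.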

  9. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

    In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation, with simulations on the order of picoseconds.
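
    The PCA step described above can be sketched directly: center the trajectory, take its SVD, and keep the top-k right singular vectors as the reduced basis onto which conformations are projected. A minimal sketch (array shapes and helper names are assumptions for illustration, not the dissertation's code):

```python
import numpy as np

def pca_basis(traj, k):
    """PCA of an MD trajectory given as a (frames, 3N) coordinate array.
    Returns the mean structure, the top-k modes, and explained-variance ratios."""
    mean = traj.mean(axis=0)
    X = traj - mean
    _, s, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt = PC modes
    return mean, Vt[:k], (s**2) / (s**2).sum()

def reduce_coords(frame, mean, modes):
    """Project one conformation onto the reduced basis and reconstruct it."""
    z = modes @ (frame - mean)        # reduced (collective) coordinates
    return z, mean + modes.T @ z      # reconstruction in full coordinates
```

When most of the variance lives in a few modes, integrating equations of motion in the reduced coordinates z is far cheaper than in the full 3N-dimensional space.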

  10. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    a variant of the same error function as in Adiv [2]. Another related approach was presented by Maybank [46,45]. Nearly all researchers in motion...with an application to stereo vision. In Proc. 7th Intern. Joint Conference on AI, pages 674-679, Vancouver, 1981. [45] S. J. Maybank. Algorithm for...analysing optical flow based on the least-squares method. Image and Vision Computing, 4:38-42, 1986. [46] S. J. Maybank. A Theoretical Study of Optical

  11. Solid-state reversible quadratic nonlinear optical molecular switch with an exceptionally large contrast.

    PubMed

    Sun, Zhihua; Luo, Junhua; Zhang, Shuquan; Ji, Chengmin; Zhou, Lei; Li, Shenhui; Deng, Feng; Hong, Maochun

    2013-08-14

    Exceptional nonlinear optical (NLO) switching behavior, including an extremely large contrast (on/off) of ∼35 and high NLO coefficients, is displayed by a solid-state reversible quadratic NLO switch. The favorable results, induced by very fast molecular motion and anionic ordering, provide impetus for the design of a novel second-harmonic-generation switch involving molecular motion. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Interferometric measurement of angular motion.

    PubMed

    Peña Arellano, Fabián Erasmo; Panjwani, Hasnain; Carbone, Ludovico; Speake, Clive C

    2013-04-01

    This paper describes the design and realization of a homodyne polarization interferometer for measuring angular motion. The optical layout incorporates carefully designed cat's eye retroreflectors that maximize the measurable range of angular motion and facilitate initial alignment. The retroreflectors are optimized and numerically characterized in terms of defocus and spherical aberrations using Zemax software for optical design. The linearity of the measurement is then calculated in terms of the aberrations. The actual physical interferometer is realized as a compact device with optical components from stock and without relying on adjustable holders. Evaluation of its performance using a commercial autocollimator confirmed a reproducibility within 0.1%, a non-linearity of less than 1 ppm with respect to the autocollimator, an upper limit to its sensitivity of about 5 × 10⁻¹¹ rad/√Hz from audioband down to 100 mHz and an angular measurement range of more than ±1°.

  13. Interferometric measurement of angular motion

    NASA Astrophysics Data System (ADS)

    Peña Arellano, Fabián Erasmo; Panjwani, Hasnain; Carbone, Ludovico; Speake, Clive C.

    2013-04-01

    This paper describes the design and realization of a homodyne polarization interferometer for measuring angular motion. The optical layout incorporates carefully designed cat's eye retroreflectors that maximize the measurable range of angular motion and facilitate initial alignment. The retroreflectors are optimized and numerically characterized in terms of defocus and spherical aberrations using Zemax software for optical design. The linearity of the measurement is then calculated in terms of the aberrations. The actual physical interferometer is realized as a compact device with optical components from stock and without relying on adjustable holders. Evaluation of its performance using a commercial autocollimator confirmed a reproducibility within 0.1%, a non-linearity of less than 1 ppm with respect to the autocollimator, an upper limit to its sensitivity of about 5 × 10⁻¹¹ rad/√Hz from audioband down to 100 mHz and an angular measurement range of more than ±1°.

  14. Spherical mirror mount

    NASA Technical Reports Server (NTRS)

    Meyer, Jay L. (Inventor); Messick, Glenn C. (Inventor); Nardell, Carl A. (Inventor); Hendlin, Martin J. (Inventor)

    2011-01-01

    A spherical mounting assembly for mounting an optical element allows for rotational motion of an optical surface of the optical element only. In that regard, an optical surface of the optical element does not translate in any of the three perpendicular translational axes. More importantly, the assembly provides adjustment that may be independently controlled for each of the three mutually perpendicular rotational axes.

  15. An image‐based method to synchronize cone‐beam CT and optical surface tracking

    PubMed Central

    Schaerer, Joël; Riboldi, Marco; Sarrut, David; Baroni, Guido

    2015-01-01

    The integration of in‐room X‐ray imaging and optical surface tracking has gained increasing importance in the field of image guided radiotherapy (IGRT). An essential step for this integration consists of temporally synchronizing the acquisition of X‐ray projections and surface data. We present an image‐based method for the synchronization of cone‐beam computed tomography (CBCT) and optical surface systems, which does not require the use of additional hardware. The method is based on optically tracking the motion of a component of the CBCT/gantry unit, which rotates during the acquisition of the CBCT scan. A calibration procedure was implemented to relate the position of the rotating component identified by the optical system with the time elapsed since the beginning of the CBCT scan, thus obtaining the temporal correspondence between the acquisition of X‐ray projections and surface data. The accuracy of the proposed synchronization method was evaluated on a motorized moving phantom, performing eight simultaneous acquisitions with an Elekta Synergy CBCT machine and the AlignRT optical device. The median time difference between the sinusoidal peaks of phantom motion signals extracted from the synchronized CBCT and AlignRT systems ranged between ‐3.1 and 12.9 msec, with a maximum interquartile range of 14.4 msec. The method was also applied to clinical data acquired from seven lung cancer patients, demonstrating the potential of the proposed approach in estimating the individual and daily variations in respiratory parameters and motion correlation of internal and external structures. The presented synchronization method can be particularly useful for tumor tracking applications in extracranial radiation treatments, especially in the field of patient‐specific breathing models, based on the correlation between internal tumor motion and external surface surrogates. PACS number: 87
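
    The calibration step, relating the optically tracked angle of the rotating CBCT component to elapsed scan time, reduces to a least-squares linear fit if the gantry speed is roughly constant (an assumption of this sketch, not a claim about the actual clinical procedure):

```python
import numpy as np

def calibrate(angles, times):
    """Fit elapsed scan time as a linear function of the tracked component's
    angle; returns a function mapping angle -> time since scan start, which
    then timestamps each X-ray projection against the surface-tracking clock."""
    A = np.vstack([angles, np.ones_like(angles)]).T
    slope, intercept = np.linalg.lstsq(A, times, rcond=None)[0]
    return lambda a: slope * a + intercept
```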

  16. Eyes only? Perceiving eye contact is neither sufficient nor necessary for attentional capture by face direction.

    PubMed

    Böckler, Anne; van der Wel, Robrecht P R D; Welsh, Timothy N

    2015-09-01

    Direct eye contact and motion onset both constitute powerful cues that capture attention. Recent research suggests that (social) gaze and (non-social) motion onset influence information processing in parallel, even when combined as sudden onset direct gaze cues (i.e., faces suddenly establishing eye contact). The present study investigated the role of eye visibility for attention capture by these sudden onset face cues. To this end, face direction was manipulated (away or towards onlooker) while faces had closed eyes (eliminating visibility of eyes, Experiment 1), wore sunglasses (eliminating visible eyes, but allowing for the expectation of eyes to be open, Experiment 2), and were inverted with visible eyes (disrupting the integration of eyes and faces, Experiment 3). Participants classified targets appearing on one of four faces. Initially, two faces were oriented towards participants and two faces were oriented away from participants. Simultaneous to target presentation, one averted face became directed and one directed face became averted. Attention capture by face direction (i.e., facilitation for faces directed towards participants) was absent when eyes were closed, but present when faces wore sunglasses. Sudden onset direct faces can, hence, induce attentional capture, even when lacking eye cues. Inverted faces, by contrast, did not elicit attentional capture. Thus, when eyes cannot be integrated into a holistic face representation they are not sufficient to capture attention. Overall, the results suggest that visibility of eyes is neither necessary nor sufficient for the sudden direct face effect. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Statistics of Storm Updraft Velocities from TWP-ICE Including Verification with Profiling Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collis, Scott; Protat, Alain; May, Peter T.

    2013-08-01

    Comparisons between direct measurements and modeled values of vertical air motions in precipitating systems are complicated by differences in temporal and spatial scales. On one hand, vertically profiling radars more directly measure the vertical air motion but do not adequately capture full storm dynamics. On the other hand, vertical air motions retrieved from two or more scanning Doppler radars capture the full storm dynamics but require model constraints that may not capture all updraft features because of inadequate sampling, resolution, numerical constraints, and the fact that the storm is evolving as it is scanned by the radars. To investigate the veracity of radar-based retrievals, which can be used to verify numerically modeled vertical air motions, this article presents several case studies from storm events around Darwin, Northern Territory, Australia, in which measurements from a dual-frequency radar profiler system and volumetric radar-based wind retrievals are compared. While a direct comparison was not possible because of instrumentation location, an indirect comparison shows promising results, with volume retrievals comparing well to those obtained from the profiling system. This prompted a statistical analysis of an extended active monsoon period during the Tropical Warm Pool International Cloud Experiment (TWP-ICE). Results show less vigorous deep convective cores, with maximum updraft velocities occurring at lower heights than some cloud-resolving modeling studies suggest.

  18. Weightlifting performance is related to kinematic and kinetic patterns of the hip and knee joints.

    PubMed

    Kipp, Kristof; Redden, Josh; Sabick, Michelle B; Harris, Chad

    2012-07-01

    The purpose of this study was to investigate the correlations between biomechanical outcome measures and weightlifting performance. Joint kinematics and kinetics of the hip, knee, and ankle were calculated while 10 subjects performed a clean at 85% of 1 repetition maximum (1RM). Kinematic and kinetic time-series patterns were extracted with principal components analysis. Discrete scores for each time-series pattern were calculated and used to determine how each pattern was related to body mass-normalized 1RM. Two hip kinematic and 2 knee kinetic patterns were significantly correlated with relative 1RM. The kinematic patterns captured hip and trunk motions during the first pull and hip joint motion during the movement transition between the first and second pulls. The first kinetic pattern captured a peak in the knee extension moment during the second pull. The second kinetic pattern captured a spatiotemporal shift in the timing and amplitude of the peak knee extension moment. The kinematic results suggest that greater lift mass was associated with a steady trunk position during the first pull and less hip extension motion during the second knee bend transition. Further, the kinetic results suggest that greater lift mass was associated with smaller knee extensor moments during the first pull, but greater knee extension moments during the second pull, and an earlier temporal transition between knee flexion-extension moments at the beginning of the second pull. Collectively, these results highlight the importance of controlled trunk and hip motions during the first pull and rapid employment of the knee extensor muscles during the second pull in relation to weightlifting performance.
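
    The analysis pipeline, extracting principal-component patterns from kinematic time series, scoring each lifter on those patterns, and correlating the scores with normalized 1RM, can be sketched as follows (synthetic waveforms and helper names are illustrative assumptions):

```python
import numpy as np

def pc_scores(waveforms, k=1):
    """Discrete PC scores: project each subject's (time-normalized) kinematic
    waveform, one row per subject, onto the top-k principal component patterns."""
    X = waveforms - waveforms.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt = patterns
    return X @ Vt[:k].T

def correlate_with_performance(scores, one_rm):
    """Pearson r between each PC score and body-mass-normalized 1RM."""
    return [np.corrcoef(scores[:, j], one_rm)[0, 1] for j in range(scores.shape[1])]
```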

  19. Real-time animation software for customized training to use motor prosthetic systems.

    PubMed

    Davoodi, Rahman; Loeb, Gerald E

    2012-03-01

    Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite their ability to create sophisticated animation and high-quality rendering, existing animation software packages are not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact, and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training, or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.

  20. In-vivo confirmation of the use of the dart thrower's motion during activities of daily living.

    PubMed

    Brigstocke, G H O; Hearnden, A; Holt, C; Whatling, G

    2014-05-01

    The dart thrower's motion is a wrist rotation along an oblique plane from radial extension to ulnar flexion. We report an in-vivo study to confirm the use of the dart thrower's motion during activities of daily living. Global wrist motion in ten volunteers was recorded using a three-dimensional optoelectronic motion capture system, in which digital infra-red cameras track the movement of retro-reflective marker clusters. Global wrist motion approximated the dart thrower's motion when hammering a nail, throwing a ball, drinking from a glass, pouring from a jug and twisting the lid of a jar, but not when combing hair or manipulating buttons. The dart thrower's motion is the plane of global wrist motion used during most activities of daily living. Arthrodesis of the radiocarpal joint instead of the midcarpal joint will allow better wrist function during most activities of daily living by preserving the dart thrower's motion.
